this post was submitted on 05 Jun 2024
94 points (100.0% liked)

Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago
[–] thingsiplay@beehaw.org 81 points 5 months ago (4 children)

How did he calculate the 70% chance? Without an explanation, this opinion carries no more weight than a Reddit post. It's just marketing fluff, so people talk about AI and in return a small percentage get converted into people interested in AI. Let's call it clickbait talk.

First he talks about a high chance that humans get destroyed by AI, then follows with a prediction that AGI will be achieved in 2027 (only 3 years from now). No. Just no. There is a long way to go to general intelligence. But isn't he trying to sell you on why AI is great? He follows with:

"We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,"

Ah yes, he does.

[–] LibertyLizard@slrpnk.net 32 points 5 months ago

Insider from OpenAI PR department speaks out!

[–] joelfromaus@aussie.zone 15 points 5 months ago (1 children)

How did he calculate the 70% chance?

Maybe they asked ChatGPT?

[–] MagicShel@programming.dev 12 points 5 months ago

ChatGPT says 1-5%, but I told it to give me nothing but a percentage and it gave me a couple of paragraphs like a kid trying to distract from the answer by surrounding it with bullshit. I think it's onto us...

(I kid. I attribute no sentience or intelligence to ChatGPT.)

[–] eveninghere@beehaw.org 4 points 5 months ago* (last edited 5 months ago)

This is a horoscope trick. They can always say AI destroyed humanity.

Trump won in 2016 and there was Cambridge Analytica doing data analysis: AI technology destroyed humanity!

Israel used AI-guided missiles to attack Gaza: AI destroyed humanity!

Whatever. You can point at any catastrophe and there will always be AI behind it somewhere, because since as early as 2014 AI has been a basic technology used everywhere.

[–] millie@beehaw.org 59 points 5 months ago* (last edited 5 months ago) (6 children)

I think when people think of the danger of AI, they think of something like Skynet or the Matrix. It either hijacks technology or builds it itself and destroys everything.

But what seems much more likely, given what we've seen already, is corporations pushing AI that they know isn't really capable of what they say it is and everyone going along with it because of money and technological ignorance.

You can already see the warning signs. Cars that run pedestrians over, search engines that tell people to eat glue, customer support AI that have no idea what they're talking about, endless fake reviews and articles. It's already hurt people, but so far only on a small scale.

But the profitability of pushing AI early, especially if you're just pumping and dumping a company for quarterly profits, is massive. The more that gets normalized, the greater the chance one of them gets put in charge of something important, or becomes a barrier to something important.

That's what's scary about it. It isn't AI itself, it's AI as a vector for corporate recklessness.

[–] fwygon@beehaw.org 12 points 5 months ago

It isn’t AI itself, it’s AI as a vector for corporate recklessness.

This. 1000% this. Many of Isaac Asimov's novels warned about this sort of thing too, as did any number of novels inspired by Asimov.

It's not that we didn't provide the AI with rules. It's not that the AI isn't trying not to harm people. It's that humans, being the clever little things we are, are far more adept at deceiving and tricking AI into saying things, and at using that to justify actions for our own benefit.

...Understandably, this is how it's being done: by selling AI that isn't as intelligent as it's being trumpeted to be. As long as these corporate shysters can organize a team to crap out a "Minimum Viable Product", they're hailed as miracle workers and get paid fucking millions.

Ideally, all of this would violate the many, many laws of many, many civilized nations... but they've done some black magic with that too: misusing their influence to attack and weaken the laws and institutions that could hold them liable, and even to completely rip out or neuter laws that could hold them accountable.

[–] localhost@beehaw.org 7 points 5 months ago (1 children)

I don't think your assumption holds. Corporations are not, as a rule, incompetent - in fact, they tend to be really competent at squeezing profit out of anything. They are misaligned, which is much more dangerous.

I think the more likely scenario is also more grim:

AI actually does continue to advance and gets better and better displacing more and more jobs. It doesn't happen instantly so barely anything gets done. Some half-assed regulations are attempted but predictably end up either not doing anything, postponing the inevitable by a small amount of time, or causing more damage than doing nothing would. Corporations grow in power, build their own autonomous armies, and exert pressure on governments to leave them unregulated. Eventually all resources are managed by and for few rich assholes, while the rest of the world tries to survive without angering them.
If we're unlucky, some of those corporations end up being managed by a maximizer AGI with no human supervision and then the Earth pretty much becomes an abstract game with a scoreboard, where money (or whatever is the equivalent) is the score.

The limitations of the human body act as an important balancing factor keeping democracies from collapsing. No human can rule a nation alone; they need armies and workers. Intellectual work is especially important (unless you have some other source of income to outsource it), but it requires good living conditions to develop and sustain. Once intellectual work is automated, infrastructure like schools, roads, hospitals, and housing ceases to be important to the rulers; they can give those to the army as a reward and make the rest of the population do manual work. Then, if manual work and policing through force become automated, there is no need even for those slivers of decency.
Once a single human can rule a nation, there are enough rich psychopaths that one of them will attempt it.

There are also other AI-related pitfalls that humanity may fall into in the meantime - automated terrorism (e.g. swarms of autonomous small drones with explosive charges using face recognition to target entire ideologies by tracking social media), misaligned AGI going rogue (e.g. the famous paperclip maximizer, although probably not exactly this scenario), collapse of the internet due to propaganda bots using next-gen generative AI... I'm sure there's more.

[–] Juice@midwest.social 4 points 5 months ago (2 children)

AI doesn't get better. It's completely dependent on computing power. They are dumping all the power into it they can, and it sucks ass. The larger the dataset, the more power it takes to search it all. Your imagination is infinite; computing power is not. You can't keep throwing electricity at a problem.

It was pushed out because there was a bunch of excess computing power after crypto crashed, or semi-stabilized. It's an excuse to lay off a bunch of workers after covid who were going to get laid off anyway. Managers were like, sweet, I'll trim some excess employees and replace them with AI! Wrong. It's a grift. It might hang on for a while, but policy experts are already looking at the amount of resources being thrown at it and getting wary.

The technological ignorance you are responding to? That's you. You don't know how the economy works and you don't know how AI works, so you're just believing all this Roko's basilisk nonsense out of an overactive imagination. It's not an insult; lots of people are falling for it. AI companies are straight up lying, and the media is stretching the truth to the point of breaking. But I'm telling you, don't be a sucker. Until there's a breakthrough that fixes the resource consumption issue by orders of magnitude, I wouldn't worry too much about Ellison's AM becoming a reality.

[–] verdare@beehaw.org 3 points 5 months ago (1 children)

I find it rather disingenuous to summarize the previous poster's comment as a "Roko's basilisk" scenario, intentionally picking a ridiculous argument to characterize the other side of the debate. I think they were pretty clear about actual threats (some more plausible than others, IMO).

I also find it interesting that you so confidently state that “AI doesn’t get better,” under the assumption that our current deep learning architectures are the only way to build AI systems.

I’m going to make a pretty bold statement: AGI is inevitable, assuming human technological advancement isn’t halted altogether. Why can I so confidently state this? Because we already have GI without the A. To say that it is impossible is to me equivalent to arguing that there is something magical about the human brain that technology could never replicate. But brains aren’t magic; they’re incredibly sophisticated electrochemical machines. It is only a matter of time before we find a way to replicate “general intelligence,” whether it’s through new algorithms, new computing architectures, or even synthetic biology.

[–] Juice@midwest.social 3 points 5 months ago* (last edited 5 months ago)

I wasn't debating you. I have debates all day with people who actually know what they're talking about; I don't come to the internet for that. I was just looking out for you, and anyone else who might fall for this. There is a hard physical limit. I'm not saying the things you're describing are technically impossible, I'm saying they are technically impossible with this version of the tech. Slapping a predictive text generator on a giant database is too expensive, and it doesn't work. It's not a debate, it's science. And not the fake kind run by corporate interests, but the real thing based on math.

There's gonna be a heatwave this week in the Western US, and there are almost constant deadly heatwaves in many parts of the world from burning fossil fuels. But we can't stop producing electricity to run these scam machines because someone might lose money.

[–] localhost@beehaw.org 3 points 5 months ago (2 children)

Your opening sentence is demonstrably false. GPT-2 was a shitpost generator, while GPT-4 output is hard to distinguish from a genuine human's. Dall-E 3 is better than its predecessors at pretty much everything. Yes, generative AI right now is getting better mostly by feeding it more training data and making it bigger. But it keeps getting better, and there's no cutoff in sight.

That you can straight-up comment "AI doesn't get better" at a tech literate sub and not be called out is honestly staggering.

[–] Ilandar@aussie.zone 3 points 5 months ago

That you can straight-up comment “AI doesn’t get better” at a tech literate sub and not be called out is honestly staggering.

I actually don't think it is because, as I alluded to in another comment in this thread, so many people are still completely in the dark on generative AI - even in general technology-themed areas of the internet. Their only understanding of it comes from reading the comments of morons (because none of these people ever actually read the linked article) who regurgitate the same old "big tech is only about hype, techbros are all charlatans from the capitalist elite" lines for karma/retweets/likes without ever actually taking the time to hear what people working within the field (i.e. experts) are saying. People underestimate the capabilities of AI because it fits their political world view, and in doing so are sitting ducks when it comes to the very real threats it poses.

[–] 0x815@feddit.de 7 points 5 months ago

Yes. We need human responsibility for everything AI does. It's not the technology that harms, but human beings and those who profit from it.

[–] Ilandar@aussie.zone 5 points 5 months ago

Yes, it's very concerning and frustrating that more people don't understand the risks posed by AI. It's not about AI becoming sentient and destroying humanity, it's about humanity using AI to destroy itself. I think this fundamental misunderstanding of the problem is the reason why you get so many of these dismissive "AI is just techbro hype" comments. So many people are genuinely clueless about a) how manipulative this technology already is and b) the rate at which it is advancing.

[–] coffeetest@beehaw.org 5 points 5 months ago

Calling LLMs, "AI" is one of the most genius marketing moves I have ever seen. It's also the reason for the problems you mention.

I am guessing that a lot of people are just thinking, "Well, AI is just not that smart... yet! It will learn more and get smarter and then, ah ha! Skynet!" It is a fundamental misunderstanding of what LLMs are doing. It may be a partial emulation of intelligence. Like humans, it uses its prior memory and experiences (data) to guess what an answer to a new question would look like. But unlike human intelligence, it has no idea what the things it says actually mean.

[–] newtraditionalists@mastodon.social 5 points 5 months ago (1 children)

@millie @floofloof this is so well articulated I can't stand it. I want to have it printed out and hand it to anyone who asks me anything about AI. Thank you for this!

[–] Retiring@lemmy.ml 46 points 5 months ago (5 children)

I feel this is all just a scam to drive up the value of AI stocks. No one in the media seems to talk about the hallucination problem, the problem of limited data for new models (Habsburg-AI), the energy constraints, etc.

It's all uncritical belief that "AI" will just become smart eventually. This technology is built on hype, and nothing more. There are limitations, and they have been reached.

[–] aStonedSanta@lemm.ee 10 points 5 months ago (1 children)

And these current LLMs aren’t just gonna find sentience for themselves. Sure they’ll pass a Turing test but they aren’t alive lol

[–] knokelmaat@beehaw.org 14 points 5 months ago

I think the issue is not whether it's sentient or not; it's how much agency you give it to control stuff.

Even before the AI craze this was an issue. Imagine if you were to create an automatic turret that kills living beings on sight, you would have to make sure you add a kill switch or you yourself wouldn't be able to turn it off anymore without getting shot.

The scary part is that the more complex and adaptive these systems become, the more difficult it can be to stop them once they are in autonomous mode. I think large language models are just another step in that complexity.

An atomic bomb doesn't pass a Turing test, but it's a fucking scary thing nonetheless.

[–] lvxferre@mander.xyz 7 points 5 months ago* (last edited 5 months ago) (1 children)

Habsburg-AI? Do you have an idea on how much you made me laugh in real life with this expression??? It's just... perfect! Model degeneration is a lot like what happened with the Habsburg family's genetic pool.

When it comes to hallucinations in general, I've got another analogy: someone trying to drive nails with a screwdriver, failing, and calling it a hallucination. In other words, I don't think the models are misbehaving; they're behaving exactly as expected, and any "improvement" in this regard is basically a band-aid that humans add to a procedure that doesn't yield many useful outputs to begin with.

And that reinforces the point from your last paragraph - those people genuinely believe that, if you feed enough data into a L"L"M, it'll "magically" become smart. It won't, just like 70kg of bees won't "magically" think as well as a human being would. The underlying process is "dumb".

[–] averyminya@beehaw.org 5 points 5 months ago

Energy restrictions could actually be worked around fairly easily using analog computing methods. Otherwise I agree completely, though; what's the point of using energy on useless tools? There are so many great things that AI is and can be used for, but of course, like anything exploitable, whatever is "for the people" becomes some amalgamation for extracting our dollars.

The funny part to me is that, like the "beautiful" AI cabins mentioned elsewhere that are clearly fake, there's this weird dichotomy: people just don't care or are too ignorant to notice the poor details, but at the same time so many generative AI tools are specifically being used to remove imperfection during the editing process. And that in itself is too bad. I'm definitely guilty of aiming for "the perfect composition", but sometimes nature and timing force your hand, which makes the piece ephemeral in a unique way. Shadows are going to exist; background subjects are going to exist.

The current state of marketed AI is selling the promise of perfection, something that's been getting sold for years already. It's just that now it's far easier to pump out scam material with these tools, something that gets easier with each advancement, and now with environmental harm added on top of the harm to the scammers' victims.

It really sucks being an optimist sometimes.

[–] darkphotonstudio@beehaw.org 2 points 5 months ago

It could be only hype. But I don't entirely agree. Personally, I believe we are only a few years away from AGI. Will it come from OpenAI and LLMs? Maybe, but it will likely come from something completely different. Like it or not, we are within spitting distance of a true Artificial Intelligence, and it will shake the foundations of the world.

[–] lvxferre@mander.xyz 42 points 5 months ago* (last edited 5 months ago) (10 children)

May I be blunt? I estimate that 70% of all OpenAI and 70% of all "insiders" are full of crap.

What people are calling nowadays "AI" is not a magic solution for everything. It is not an existential threat either. The main risks that I see associated with it are:

  1. Assumptive people taking LLM output for granted, to disastrous outcomes. Think on "yes, you can safely mix bleach and ammonia" tier (note: made up example).
  2. Supply and demand. Generative models have awful output, but sometimes "awful" = "good enough".
  3. Heavy increase in energy and resources consumption.

None of those issues was created by machine "learning"; it just synergises with them.

[–] BarryZuckerkorn@beehaw.org 12 points 5 months ago (1 children)

Your scenario 1 is the actual danger. It's not that AI will outsmart us and kill us. It's that AI will trick us into trusting it with more responsibility than it can responsibly handle, to disastrous results.

It could be small scale, low stakes stuff, like an AI designing a menu that humans blindly cook. Or it could be higher stakes stuff that actually does things like affect election results, crashes financial markets, causes a military to target the wrong house, etc. The danger has always been that humans will act on the information provided by a malfunctioning AI, not that AI and technology will be a closed loop with no humans involved.

[–] starman@programming.dev 31 points 5 months ago

OpenAI Insider

Ah, what a reliable and unbiased source

[–] django@discuss.tchncs.de 28 points 5 months ago (1 children)

The energy demand of AI will harm humanity, because we keep feeding it huge amounts of energy produced by burning fossil fuels.

[–] darkphotonstudio@beehaw.org 19 points 5 months ago* (last edited 5 months ago) (3 children)

I believe much of our paranoia concerning ai stems from our fear that something will come along and treat us like we treat all the other life on this planet. Which is bitterly ironic considering our propensity for slaughtering each other on a massive scale. The only danger to humanity is humans. If humanity is doomed, it will be our own stupid fault, not AI.

[–] Kichae@lemmy.ca 15 points 5 months ago (3 children)

I think much of it comes from "futurologists" spending too much time smelling each others' farts. These AI guys think so very much of themselves.

[–] verdare@beehaw.org 4 points 5 months ago (1 children)

The only danger to humans is humans.

I’m sorry, but this is a really dumb take that borders on climate change denial logic. A sufficiently large comet is an existential threat to humanity. You seem to have this optimistic view that humanity is invincible against any threat but itself, and I do not think that belief is justified.

People are right to be very skeptical about OpenAI and “techbros.” But I fear this skepticism has turned into outright denial of the genuine risks posed by AGI.

I find myself exhausted by this binary partitioning of discourse surrounding AI. Apparently you have to either be a cult member who worships the coming god of the singularity, or think that AI is either impossible or incapable of posing a serious threat.

[–] darkphotonstudio@beehaw.org 3 points 5 months ago

You seem to have this optimistic view that humanity is invincible against any threat but itself

I didn't say that. You're making assumptions. However, I don't take AGI as a serious risk, not directly anyway. AGI is a big question mark at this time, hardly comparable to a giant comet or a pandemic, of which we have experience or solid scientific evidence. Could it be a threat? Yeah. Do I personally think so? No. Our reaction to it, and our exploitation of it, will likely do far more harm than any direct action by an AGI.

[–] flux@lemmyis.fun 2 points 5 months ago (1 children)

But if AI learns from us...

[–] technocrit@lemmy.dbzer0.com 15 points 5 months ago

OpenAI ~~Insider~~ Investor Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity

[–] A1kmm@lemmy.amxl.com 15 points 5 months ago (5 children)

I think any prediction based on a 'singularity' neglects to consider the physical limitations, and just how long the journey towards significant amounts of AGI would be.

The human brain has an estimated 100 trillion neuronal connections - so probably a good order of magnitude estimation for the parameter count of an AGI model.

If we consider a current GPU, e.g. the 12 GB RTX 3060, it can hold about 24 billion parameters at 4-bit quantisation (in reality a fair few less), and uses 180 W of power. At 100 trillion parameters you would need roughly 4,200 such GPUs, so an AGI might use 750 kW of power to operate. A super-intelligent machine might use more. That is a farm of 2,500 300 W solar panels, while the sun is shining, just for the equivalent of one person.

Now, to pose a real threat against the billions of humans, you'd need more than one person's worth of intelligence. Maybe an army equivalent to 1,000 people, powered by about 4.2 million GPUs and 2.5 million solar panels.

That is not going to materialise out of the air too quickly.
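For anyone who wants to check the arithmetic, here is a quick sketch of the same back-of-envelope estimate. Every figure (synapse count, VRAM, quantisation, wattage) is a rough assumption carried over from the comment, not a measurement:

```python
# Napkin math: scale a 12 GB consumer GPU up to a 100-trillion-parameter model.
human_synapses = 100e12               # ~100 trillion connections, used as a parameter-count guess
gpu_vram_bytes = 12e9                 # 12 GB card
params_per_gpu = gpu_vram_bytes * 2   # 4-bit quantisation: 2 parameters per byte
gpu_power_watts = 180

gpus_per_agi = human_synapses / params_per_gpu            # ~4,167 GPUs
power_per_agi_kw = gpus_per_agi * gpu_power_watts / 1000  # ~750 kW
panels_per_agi = power_per_agi_kw * 1000 / 300            # ~2,500 solar panels at 300 W

print(f"{gpus_per_agi:,.0f} GPUs, {power_per_agi_kw:,.0f} kW, {panels_per_agi:,.0f} panels")
# A 1,000-person-equivalent "army" multiplies all of the above by 1,000.
```

Note the estimate is linear in every assumption, so halving the bits per parameter or doubling the VRAM halves the GPU count.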

In practice, as we get closer to an AGI or ASI, there will be multiple separate deployments of similar sizes (within an order of magnitude), and they won't be aligned to each other - some systems will be adversaries of any system executing a plan to destroy humanity, and will be aligned to protect against harm (AI technologies are already widely used for threat analysis). So you'd have a bunch of malicious systems, and a bunch of defender systems, going head to head.

The real AI risks, which I think many of the people ranting about singularities want to obscure, are:

  • An oligopoly of companies get dominance over the AI space, and perpetuates a 'rich get richer' cycle, accumulating wealth and power to the detriment of society. OpenAI, Microsoft, Google and AWS are probably all battling for that. Open models is the way to battle that.
  • People can no longer trust their eyes when it comes to media; existing problems of fake news, deepfakes, and so on become so severe that they undermine any sense of truth. That might fundamentally shift society, but I think we'll adjust.
  • Doing bad stuff becomes easier. That might be scamming, but at the more extreme end it might be designing weapons of mass destruction. On the positive side, AI can help defenders too.
  • Poor quality AI might be relied on to make decisions that affect people's lives. Best handled through the same regulatory approaches that prevent companies and governments doing the same with simple flow charts / scripts.
[–] CanadaPlus@lemmy.sdf.org 4 points 5 months ago* (last edited 5 months ago)

The human brain has an estimated 100 trillion neuronal connections - so probably a good order of magnitude estimation for the parameter count of an AGI model.

Yeah, but a lot of those do things unrelated to higher reasoning. A small monkey is smarter than a moose, despite the moose obviously having way more synapses.

I don't think you can rely on this kind of argument so heavily. A brain isn't a muscle.

[–] darkphotonstudio@beehaw.org 4 points 5 months ago

I think you're right on the money when it comes to the real dangers, especially your first bullet point. I don't necessarily agree with your napkin maths. If the virtual neurons are used in a more efficient way, that could make up for a lot versus human neuron count.

[–] ondoyant@beehaw.org 2 points 5 months ago

Open models is the way to battle that.

This is something I think needs to be interrogated. None of these models, even the supposedly open ones are actually "open" or even currently "openable". We can know the exact weights for every single parameter, the code used to construct it, and the data used to train it, and that information gives us basically no insight into its behavior. We simply don't have the tools to actually "read" a machine learning model in the way you would an open source program, the tech produces black boxes as a consequence of its structure. We can learn about how they work, for sure, but the corps making these things aren't that far ahead of the public when it comes to understanding what they're doing or how to change their behavior.

[–] Floey@lemm.ee 14 points 5 months ago (1 children)

This fear-mongering just benefits Altman. If his product is powerful enough to be a threat to humanity, then it is also powerful enough to be capable of many useful things, things it has not proven itself capable of. Ironically, spreading fear about its capabilities will likely raise investment, so if you actually are afraid of OpenAI somehow arriving at a dangerous AGI, you should really be trying to convince people of its lack of real utility.

[–] tal@lemmy.today 10 points 5 months ago* (last edited 5 months ago) (1 children)

The guy complaining left the company:

Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had "lost confidence that OpenAI will behave responsibly" as it continues trying to build near-human-level AI.

I don't think that he stands to benefit.

He also didn't say that OpenAI was on the brink of having something like this either.

Like, I don't think all the fighting at OpenAI and people being ejected and such is all a massive choreographed performance. I think that there have been people who really strongly disagree with each other.

I absolutely think that AGI has the potential to pose existential risks to humanity. I just don't think that OpenAI is anywhere near building anything capable of that. But if you're trying to build towards such a thing, the risks are something that I think a lot of people would keep in mind.

I think that human level AI is very much technically possible. We can do it ourselves, and we have hardware with superior storage and compute capacity. The problem we haven't solved is the software side. And I can very easily believe that we may get there not all that far in the future. Years or decades, not centuries down the road.

[–] Floey@lemm.ee 3 points 5 months ago

I didn't think it was a choreographed publicity stunt. I just know Altman has used AI fear in the past to keep people from asking rational questions like "What can this actually do?" He obviously stands to gain from people thinking they are on the verge of AGI. And someone looking for a new job in the field also has something to gain from it.

As for the software thing, if it's done by someone, it won't be OpenAI and the megacorporations following in its footsteps. They seem insistent on throwing more data (of diminishing quality) and more compute (an impractical amount) at the same style of models, hoping they'll reach some kind of tipping point.

[–] KevonLooney@lemm.ee 14 points 5 months ago (2 children)

I just realized something: since most people have no idea what AI is, it could easily be used to scam people. I think that will be its main function initially.

Like, the average person does not have access to real-time stock data. You could make a fake AI program that pretends to be a trading algorithm and makes a ton of pretend money as the mark watches. The data would be 100% real and verifiable, just picked a few seconds after the fact.

Since most people care a lot about money, this will be some of the first widespread applications of real time AI. Just tricking people out of money.
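To illustrate why the trick works, here's a minimal, hypothetical sketch (made-up prices and function name): a "trading algorithm" that is secretly allowed to read the price a short delay after the fact only ever reports winning trades, on data that is otherwise real and verifiable.

```python
# Lookahead bias as a scam: "predict" the market using prices that are
# already known, and only the winning trades ever show up.
prices = [100.0, 99.5, 101.2, 100.8, 102.0, 101.1, 103.4]  # made-up tick data

def lookahead_profit(prices, delay=1):
    """Buy whenever the price `delay` ticks later is higher (perfect hindsight)."""
    profit = 0.0
    for now, future in zip(prices, prices[delay:]):
        if future > now:            # the "prediction" is just the future value
            profit += future - now  # losing ticks are silently skipped
    return profit

print(lookahead_profit(prices))  # always >= 0: the mark only ever sees gains
```

On any price series with some upticks, this reports a positive "return", which is exactly why a demo on real, checkable data proves nothing about a predictor.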

[–] scrubbles@poptalk.scrubbles.tech 15 points 5 months ago (1 children)

Yeah I'll admit I was freaked out at the beginning. So I learned about models, used them, and got familiar with them. Now I'm less freaked out and more "oh my god so many people are going to get scammed/tricked".

Go on Facebook and you'll see it's a good 50-70% AI garbage now. My favorite are "log cabin" and kitchen posts that are just images of them with blanket titles like "wish I lived here" with THOUSANDS of comments of people saying "YES" or "it's so beautiful". Of course it is it has no supports! The cabinets are held up by nothing! There are 9 kinds of lanterns and most are floating. Jesus people are not ready for it.

[–] frog@beehaw.org 9 points 5 months ago

The "Willy Wonka Experience" event comes to mind. The images on the website were so obviously AI-generated, but people still coughed up £35 a ticket to take their kids to it, and were then angry that the "event" was an empty warehouse with a couple of plastic props and three actors trying to improvise because the script they'd been given was AI-generated gibberish. Straight up scam.

[–] 2xsaiko@discuss.tchncs.de 11 points 5 months ago* (last edited 5 months ago) (1 children)

I mean, I give it a 100% chance if they're allowed to keep going like this, considering the enormous energy and water consumption, the essentially slave labor to classify data for training (the amount is so huge it would never be financially viable to pay people fairly), and the end result, which is to fill the internet with garbage.

You really don't need to be an insider to see that.

[–] Railison@aussie.zone 3 points 5 months ago

When I think of AI ruining humanity, this is how I picture it

[–] qqq@programming.dev 2 points 5 months ago

Wake me up when nixpkgs issues decline significantly from 5k+ due to AI.
