Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
I want to apologize for changing the description without telling people first. After reading arguments about how AI has been so overhyped, I'm not that frightened by it. It's awful that it hallucinates, and that it just spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep the articles on AI hype coming, because they're quite funny, and they give me a sense of ease knowing that, while blatant lies are easy to tell, actual evidence is much harder to fake.

I also want to make room for people who think that there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I will call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or read about the diminishing returns of Deep Learning. Maybe you could even become a Mod!

Boosters, or people who heavily use AI and see it as a source of good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly champion artists losing their jobs. They go against the very purpose of this community. If I hear a comment on here saying that AI is "making things good" or cheering on putting anyone out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.


Alright, I just want to clarify that I've never modded a Lemmy community before. I just have the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by the decision from u/spez to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to attack AI development, and if you have evidence of AIBros being cruel and remorseless, make sure to save the evidence for people "on the fence". Remember, we don't know if AI is unstoppable. AI needs huge amounts of energy and circuitry to run. There may very well be an end to this cruelty, and it's up to us to begin that end.


Source (Bluesky)


Microsoft is prepared to walk away from high-stakes negotiations with OpenAI over the future of its multibillion-dollar alliance, as the ChatGPT maker seeks to convert into a for-profit company.

The software giant has considered halting complex discussions with the $300bn AI start-up if the two sides remain unable to agree on critical issues, such as the size of Microsoft’s future stake in OpenAI, according to people with knowledge of its plans.

In this eventuality, Microsoft would rely on its existing commercial contract to retain access to OpenAI’s technology until 2030, unless there was an offer that was equal to or better than its current arrangements, according to these people.

These people stressed, however, that Microsoft was operating in “good faith” and both parties were meeting daily to try to put a plan on the table and were hopeful a deal could be reached.

“We have a long-term, productive partnership that has delivered amazing AI tools for everyone,” Microsoft and OpenAI said in a joint statement. “Talks are ongoing and we are optimistic we will continue to build together for years to come.”

OpenAI needs a deal with Microsoft to complete a move away from its non-profit origins into a more conventional corporate structure, which it believes will unlock funding and launch an initial public offering.

Microsoft must approve the switch by the end of the year or OpenAI risks losing billions of funding from other investors, including SoftBank.

In discussions over the past year, the two sides have battled over how much equity in the restructured group Microsoft should receive in exchange for the more than $13bn it has invested in OpenAI to date. Discussions over the stake have ranged from 20 per cent to 49 per cent.

The pair are also revising the terms of their wider contract, first drafted when Microsoft invested $1bn into OpenAI in 2019.

Under its current arrangement, Microsoft has exclusive rights to sell access to OpenAI’s models and receives a 20 per cent share of revenues up to $92bn.

Microsoft is reluctant to give ground on its continued access to OpenAI’s technology or its share of the group’s revenues, according to multiple people close to the discussions.

The Wall Street Journal reported this week that OpenAI had considered a “nuclear option” of accusing Microsoft of anti-competitive behaviour over its partnership.

“Holding out is Microsoft’s nuclear option . . . and they are just making OpenAI sweat,” said one person close to OpenAI, who also argued access to the ChatGPT maker’s IP was necessary for Microsoft to maintain its position in the race to commercialise AI against rivals such as Google and Meta.

One person close to Microsoft said the “status quo” was acceptable for the Big Tech company and that it was “happy with the current contract” and prepared to “run it through” until 2030. 

“The market cares about how much revenue Microsoft is making . . . not about how much equity it owns in OpenAI, [and] this deal moves revenue away from Microsoft,” said another person who has discussed the negotiations with Microsoft executives.

“The question is, what does Microsoft get in return for giving up the right to that revenue?”

Microsoft has already begun diversifying away from OpenAI models in recent months, as part of chief executive Satya Nadella’s belief that leading models will become “commoditised” — or have less value than being able to sell AI-enabled applications and digital assistants built on top of them.

In May, the software giant made Elon Musk’s xAI model Grok available to its cloud computing customers.

“OpenAI is not necessarily the frontrunner anymore,” said one person close to Microsoft, remarking on the competition between rival AI model makers.

Several other elements of the current contract are also up for negotiation, including Microsoft’s exclusive rights to sell OpenAI’s software through its Azure cloud computing service; its right of first refusal to provide computing infrastructure to OpenAI; and the software giant’s access to the AI group’s intellectual property before it reaches “artificial general intelligence”.

The latter clause refers to a point where OpenAI creates a “highly autonomous system that outperforms humans at most economically valuable work” and is likely to be dropped, as the Financial Times previously reported.

OpenAI’s chief executive Sam Altman and its chief financial officer Sarah Friar have also said the company is struggling to access the computing power needed to run ChatGPT, which has raced to 500mn weekly active users worldwide, while also training new models and launching products. 

Two former Microsoft executives involved in managing OpenAI’s compute requirements said the relationship between the groups had frayed significantly over the issue, particularly around Altman’s demands for faster access to even more infrastructure.

Even if the issues are resolved, the transaction will have to be approved by attorneys-general in Delaware and California. The conversion is also subject to a legal challenge from xAI chief Musk, which has been supported by former OpenAI employees.

For OpenAI, getting an agreement with Microsoft is crucial. Investors in the AI group’s past two financing rounds have agreed to provisions that require the company to successfully convert into a for-profit entity or their equity investment becomes debt.

Should this process be delayed or abandoned, investors have the option to claim some of their investment back. SoftBank, which led the most recent round, could cut its $30bn investment by $10bn if the conversion is not completed by the end of the year. People close to OpenAI are confident that investors would retain their commitments, even if the transaction was delayed.

A Silicon Valley veteran close to Microsoft said the software giant “knows that this is not their problem to figure this out, technically, it’s OpenAI’s problem to have the negotiation at all”.


cross-posted from: https://feddit.dk/post/13452853

Letter to Arc members 2025 (browsercompany.substack.com)
  1. Webpages won’t be the primary interface anymore. Traditional browsers were built to load webpages. But increasingly, webpages — apps, articles, and files — will become tool calls with AI chat interfaces. In many ways, chat interfaces are already acting like browsers: they search, read, generate, respond. They interact with APIs, LLMs, databases. And people are spending hours a day in them. If you’re skeptical, call a cousin in high school or college — natural language interfaces, which abstract away the tedium of old computing paradigms, are here to stay.
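If you're curious what "webpages become tool calls" would actually look like in practice, here is a minimal, hypothetical Python sketch. The tool names and registry below are invented for illustration and are not from the Browser Company's letter or any real vendor's API; the point is only that the chat interface never renders a page, it invokes a registered tool and folds the text result back into the conversation.

```python
# Hypothetical sketch of "webpages as tool calls" in a chat interface.
# All names are illustrative; this is not any real browser's or LLM vendor's API.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ToolCall:
    name: str        # which registered tool the assistant wants to invoke
    arguments: dict  # structured arguments instead of a URL bar


# Registry of "tools" the chat interface can call instead of rendering a page.
TOOLS: Dict[str, Callable[..., str]] = {
    "fetch_page": lambda url: f"(plain-text extract of {url})",
    "search": lambda query: f"(top results for '{query}')",
}


def dispatch(call: ToolCall) -> str:
    """Run a tool call and return text for the model to summarise,
    replacing the step where a browser would render the page."""
    return TOOLS[call.name](**call.arguments)


if __name__ == "__main__":
    # The assistant "browses" by emitting tool calls, not by loading webpages.
    print(dispatch(ToolCall("search", {"query": "arc browser letter 2025"})))
    print(dispatch(ToolCall("fetch_page", {"url": "https://example.com/article"})))
```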

Owner of web browser says webpages are cooked, I guess.

Wow.


My work thinks they're being forward-thinking by shoving AI into everything. Not that we're forced to use it, but we are encouraged to. Outside of using it to convert a screenshot to text (not even AI... that's just OCR), I haven't had much use for it since it's wrong a lot. It's pretty useless for the type of one-off work I do as well. We are supposed to share any "wins" we've had, but I'd sooner they stop paying a huge subscription to Sammy A.


human-driven technology goes brrrrr


Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

Project link: https://www.brainonllm.com/

Figures: https://www.brainonllm.com/figures

Paper on arxiv (pdf): https://arxiv.org/pdf/2506.08872

With today's wide adoption of LLM products like ChatGPT from OpenAI, humans and businesses engage and use LLMs on a daily basis. Like any other tool, it carries its own set of advantages and limitations. This study focuses on finding out the cognitive cost of using an LLM in the educational context of writing an essay.

We assigned participants to three groups: an LLM group, a Search Engine group, and a Brain-only group, where each participant used the designated tool (or no tool, in the case of the Brain-only group) to write an essay. We conducted three sessions with the same group assignment for each participant. In the fourth session we asked LLM group participants to use no tools (we refer to them as LLM-to-Brain), and Brain-only group participants were asked to use an LLM (Brain-to-LLM). We recruited a total of 54 participants for Sessions 1, 2 and 3, and 18 of them completed Session 4.

We used electroencephalography (EEG) to record participants' brain activity in order to assess their cognitive engagement and cognitive load, and to gain a deeper understanding of neural activations during the essay-writing task. We performed NLP analysis, and we interviewed each participant after each session. We performed scoring with the help of human teachers and an AI judge (a specially built AI agent).

We discovered consistent homogeneity across named entities (NER), n-grams and topic ontologies within each group. EEG analysis presented robust evidence that the LLM, Search Engine and Brain-only groups had significantly different neural connectivity patterns, reflecting divergent cognitive strategies. Brain connectivity systematically scaled down with the amount of external support: the Brain-only group exhibited the strongest, widest-ranging networks, the Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling. In Session 4, LLM-to-Brain participants showed weaker neural connectivity and under-engagement of alpha and beta networks, while Brain-to-LLM participants demonstrated higher memory recall and re-engagement of widespread occipito-parietal and prefrontal nodes, likely supporting visual processing, similar to that frequently observed in the Search Engine group. The reported ownership of the LLM group's essays in the interviews was low. The Search Engine group had strong ownership, though less than the Brain-only group. The LLM group also fell behind in their ability to quote from the essays they had written just minutes prior.

As the educational impact of LLM use is only beginning to be felt by the general population, our results point to a pressing concern: a likely decrease in learning skills. The use of LLMs had a measurable impact on participants, and while the benefits were initially apparent, over the course of four months the LLM group's participants performed worse than their counterparts in the Brain-only group at every level: neural, linguistic and scoring.
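To get a concrete feel for the "homogeneity across n-grams" finding, here is a minimal sketch of the kind of overlap measure one could compute between essays in a group. This is not the authors' actual pipeline, just an illustration using pairwise Jaccard similarity of word trigrams; the example essays are invented.

```python
# Illustrative sketch (not the paper's code): measure how similar essays are
# within a group via pairwise Jaccard similarity of their word n-grams.
from itertools import combinations


def ngrams(text: str, n: int = 3) -> set:
    """Lowercased word n-grams of one essay."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0


def group_homogeneity(essays: list[str], n: int = 3) -> float:
    """Mean pairwise n-gram overlap; higher means more homogeneous essays."""
    grams = [ngrams(e, n) for e in essays]
    pairs = list(combinations(grams, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)


if __name__ == "__main__":
    # Invented example essays, just to show the measure in action.
    llm_like = [
        "happiness is a state of mind that comes from within",
        "true happiness is a state of mind that comes from within us",
    ]
    brain_like = [
        "my grandmother's garden taught me what contentment feels like",
        "getting lost in a city with no plan is my idea of joy",
    ]
    print("LLM-like group overlap:", round(group_homogeneity(llm_like), 3))
    print("Brain-like group overlap:", round(group_homogeneity(brain_like), 3))
```

A higher mean overlap indicates that a group's essays reuse the same phrasing, which is the sort of within-group homogeneity the abstract describes.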


Who came up with this load of cow residue? Yeah, AI plays the role of accelerating the problem, so now it's at least known. Nobody is doing anything about it. Trump wants us all to do less about it since he's an old fart and closer to 100 than the rest of us.


Was using my work laptop that defaulted to Google. If I didn't have adblock, the actual result wouldn't even show up without scrolling down.


Alt text: At the top is a screenshot from Wikimedia Commons showing an image that was updated to a larger size, with the upload comment saying "Improved image". Below it is the goose-chasing meme, with the goose twice asking "Where did the pixels come from?".


My Uber driver was telling me about this company, trying to get a referral. He was saying you get paid $60 for a two hour session where you wear a helmet and type shit out.

Not as much of an AI hater as a lot of the people here, but this use case sounds particularly dystopian. So I figure if enough people sign up and just think about random shit and fuck up their data, maybe that'll gum up the works long enough for them to run out of money.
