Why can't we have nice things?
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests
Emphasis mine. That's a catch-all that will be abused to fuck over projects like SD or Llama that release uncensored models.
Regulatory capture via executive order.
So... what we need is a form of cryptocurrency where the process of mining trains the model. No company would own it.
Could we have FINALLY found a use for the blockchain?
Might be possible, but certainly not easy
https://0xparc.org/blog/zk-mnist
I believe this is only for inference, although it may be possible to do it with training as well.
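Not the person above, but to make the "mining trains the model" idea concrete, here's a toy sketch of a proof-of-useful-work loop: miners propose weight updates instead of grinding hashes, and a block is only accepted if the update measurably improves the model on a shared validation set. Everything here (the names, the linear model, the acceptance rule) is made up for illustration; a real protocol would still need stake/sybil resistance and, as the zk-mnist link hints, succinct proofs so verifiers don't have to redo the work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Publicly agreed validation data (a real protocol would commit to this
# on-chain, e.g. by its hash). Purely illustrative linear-regression task.
X_val = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
y_val = X_val @ true_w + 0.1 * rng.normal(size=256)

def loss(w):
    """Mean squared error of a linear model on the shared validation set."""
    return float(np.mean((X_val @ w - y_val) ** 2))

def mine_update(w, lr=0.01):
    """'Mining' = doing useful work: one gradient step proposing new weights."""
    grad = 2 * X_val.T @ (X_val @ w - y_val) / len(y_val)
    return w - lr * grad

def verify_update(old_w, new_w, min_improvement=1e-6):
    """Verifiers accept a block only if the proposed weights genuinely
    improve the model; otherwise the 'work' is rejected."""
    return loss(old_w) - loss(new_w) >= min_improvement

# Toy chain: each accepted block carries a better set of weights.
weights = np.zeros(8)
for block in range(10):
    proposal = mine_update(weights)
    if verify_update(weights, proposal):
        weights = proposal
        print(f"block {block}: accepted, val loss = {loss(weights):.4f}")
    else:
        print(f"block {block}: rejected")
```

The obvious catch is that verification here just re-evaluates the loss, so miners could overfit to the public validation set. That's why the zero-knowledge angle matters: you'd want a succinct proof that the work was done correctly, and so far that's mostly been demonstrated for inference rather than training, as noted above.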
tl;dr 🃏🤡🎈🎪
In this thread are a bunch of people who don't know how executive orders, the executive branch, or our government in general works, claiming some batshit nonsense.
Not sure we are looking at the same thread.
Care to elaborate on what the actual reality of this EO is?
I'm not who you responded to, but it does seem that there's a lot more fear-mongering about the executive order in this thread than actual problems. This is clearly aimed at basic regulation of huge AI. Lots of stuff in here is fairly common sense - help people determine what's created purely by machine vs what isn't so that they're not misinformed or defrauded, reform some data scraping stuff to protect privacy and keep an AI model from getting too much unnecessary personal info in its training (like, it probably doesn't need your home address to train; not that I can't imagine a use for such a thing, but that would need to be regulated).
Read the bullet points, it's not that long, and it's not that hard to understand. People are running around this thread talking about "Download and start torrenting everything OMG!". That's just... not a reasonable or rational take on the substance of the order. I would be concerned about things like the casual mention of predictive policing (though, again, regulation is needed to prevent it, so it's gotta be mentioned as something to be regulated).
But if you're an open source hobbyist or enthusiast, your reaction to this should be a resounding "meh". If you're a researcher or professional, I think I'd pay a lot more attention, but I'm not qualified to tell you whether or not the substance of it is a problem for your field.
Please show me where it specifies 'huge AI'.
Yeah... ok. The order is worded incredibly vaguely, and many of the provisions are very concerning.
We should make model distribution resistant to service disruption even if it's an overreaction.
I would agree with that. I wouldn't do it as a reaction/overreaction to this, but redundancy is always good.
I'll bite... An executive order is literally just that, an order from the executive to their employees.
So let's say that you work at McDonalds, and the CEO (Chief EXECUTIVE Officer) of McDonald's issues an order that the milkshake machine must be cleaned hourly. You do have to clean the milkshake machine hourly or risk disciplinary action. But if you work at Wendy's you don't have to give two shits about what the CEO of McDonald's wants w.r.t. milkshake machines.
Similarly, the president of the USA is the head of the executive branch of the federal government, and an executive order is just a directive to federal employees and federal agencies (i.e. people actually employed directly by the federal government). The president can't make some random citizen or company in this country (with extremely limited exception) do jack shit.
IN THIS CASE, the EO references Title VII of the Defense Production Act
Section 708 of the DPA authorizes the President to consult with representatives of industry, business, financing, agriculture, labor, and other interests to provide for the development of **voluntary agreements** and plans of action to help provide for the national defense. A **voluntary agreement** is an association of private interests, approved by the Government, to plan and coordinate actions in support of the national defense. Under Section 708, participants in a **voluntary agreement** are granted relief from antitrust laws.

**Voluntary agreements** enable cooperation among business competitors to plan and coordinate measures to increase the supply of materials and services needed for military and homeland security purposes. For example, a **voluntary agreement** could be used to enable cooperation among suppliers of critical materials and services to plan and carry out emergency preparedness, response, and recovery activities.
I've highlighted the critical parts for you.
---
IMHO, more people need to at least watch the Schoolhouse Rock video about how bills become laws before voting. The only way to impose a new rule in this country (i.e. something every private company would actually have to follow) is to pass a bill through the House + the Senate + get it signed by the president (or have the veto overridden), AND then it has to stand up to judicial review (e.g. the Supreme Court doesn't bounce it for violating the constitution).
I sorted by controversial 😂
I wonder what the argument would look like if a case happens later - either the First Amendment or the Fourteenth Amendment's Equal Protection Clause - because there will definitely be "exceptions" made, such as defense companies or medical companies being allowed to develop "unrestricted" AI for national security and for medical learning and advancement.
Health and safety, the best excuse for tyranny we've ever found.
Hope I live to see the day our culture breaks out of the daycare mindset.
Anyone got the text of the actual executive order? From the fact sheet it seems like while it requires AI companies to notify the government about a bunch of things and send them reports, there's no actual mechanism to stop or ban anything. It's basically just saying hey, if you make a big AI model now you gotta notify us about it and write a report about its flaws. They probably won't even know what they are looking at in the reports, and the model will be released before they decide to do anything about it. Not to mention companies will put the most positive spin on the report possible. Definitely short-term bullish for AI at least IMO. Seems like bureaucratic speak to look like you're regulating AI without actually doing anything significant.
That's what it seems like to me. Not really any sort of alarming policy; I can't imagine them releasing anything less than this. It's just saying if you're training a model, you need to keep us informed and make sure it's safe, and to help us make sure the impending AI future doesn't destroy the economy.
But the cat is out of the bag for large open models, unless there is some sort of massive crackdown. The US is incapable of policing the internet.
EOs typically have the force of law, though their scope can be challenged in court.
Need ways of distributing LLMs besides Hugging Face
Torrent
I mean BitTorrent has been around forever, we can just use that, plus it's decentralized.
Mistral made a good start with that
For those who are unaware, Mistral was originally released as a raw magnet link. The way things are going, this will be the future of model distribution. Someone out there will develop the Pirate Bay but for AI models.
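For anyone who wants to try it, here's a minimal sketch of pulling a model from a magnet link with the libtorrent Python bindings (assuming python-libtorrent is installed; the magnet URI below is a placeholder, not Mistral's actual link):

```python
import time
import libtorrent as lt

# Placeholder magnet URI - substitute the real link for the model you want.
MAGNET = "magnet:?xt=urn:btih:..."

ses = lt.session()
params = lt.parse_magnet_uri(MAGNET)
params.save_path = "./models"          # where the weights get written
handle = ses.add_torrent(params)

print("fetching metadata...")
while not handle.status().has_metadata:
    time.sleep(1)

print("downloading", handle.status().name)
while not handle.status().is_seeding:  # is_seeding means the download finished
    s = handle.status()
    print(f"{s.progress * 100:.1f}% done, {s.download_rate / 1e6:.2f} MB/s")
    time.sleep(5)

print("done - and now you're seeding it for the next person")
```

The nice part is the last line: every downloader becomes another redundant host, which is exactly the service-disruption resistance people above are asking for.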
Just there to prevent future competition.
As someone who isn't an expert in this area, can someone ELI12 for me why everyone here hates the executive order's contents?
This is unconstitutional. Code is speech. See Bernstein v. United States.