I mean, that broadly seems like a good thing. Execution is important, but on paper this seems like the kind of forward-thinking policy we need.
Quite frankly it didn't put enough restrictions on the various "national security" agencies, and so while it may help to stem the tide of irresponsible usage by many of the lesser-impact agencies, it doesn't do the same for the agencies that we know will be the worst offenders (and have been the worst offenders).
“If the benefits do not meaningfully outweigh the risks, agencies should not use the AI,” the memo says. But the draft memo carves out an exemption for models that deal with national security and allows agencies to effectively issue themselves waivers if ending use of an AI model “would create an unacceptable impediment to critical agency operations.”
This tells me that nothing is going to change if people can just say their algorithms would make them too inefficient. Great sentiment, but this loophole will make it useless.
This seems to me like an exception that would realistically only apply to the CIA, NSA, and sometimes the FBI. I doubt the Department of Housing and Urban Development will get a pass. Overall seems like a good change in a good direction.
The CIA and NSA are exactly who we don't want using it though.
They're exactly who will carry on using it, even if there weren't any exemptions.
Like either of those agencies will let us know what they are doing in the first place.
At a certain level, there are no rules when they never have to tell what they are doing.
Given the "success" of Israel's hi-tech border fence, it seems like bureaucracies think tech will work better than actually, you know, resolving/preventing geopolitical problems with diplomacy and intelligence.
I worry these kinds of tech solutions become a predictable crutch. Assuming there is some kind of real necessity to these spy programs (debatable), it seems like reliance on data tech can become a weakness as soon as those intending harm understand how it works.
Algorithms that gerrymander voting district boundaries might be an early battleground.
Folksy narrator: "Turns out, the U.S. government cannot operate without racism."
> Great sentiment but
It's not a "great sentiment" - it's essentially just more of the same liberal "let's pretend we care by doing something completely ineffective" posturing, and little else.
Democrats are so fucking naive. They actually think that a system of permission slips is sufficient to protect us from the singularity.
OpenAI’s original mission, before they forgot it, was the only workable method: distribute the AI far and wide to establish a multipolar ecosystem.
Hell fucking yea. Who is this Biden guy?
Another W from Biden, and extreme leftists will still say he did nothing. 😔
Nah, if this sticks, my extreme leftist ass will sing his praises.
Extremes hate everything, so it doesn’t really matter what they think.
He did something: he added national security loopholes.
The worst possible offenders aren't really being reined in by this executive order.
I swear to god there has to be an entire chapter in Gödel Escher Bach about how this is literally impossible.
Is it already too late for us? Does anyone truly believe that will be enough to protect us?
Sent to my state representative. Thanks!