[–] baggins@beehaw.org 22 points 1 year ago (2 children)

How could we all fall over dead due to AI?

Seems like clickbait guff and old fish parts.

[–] phillaholic@beehaw.org 6 points 1 year ago

What if they shut off the oxygen!?

[–] jarfil@beehaw.org 3 points 1 year ago (1 child)

Someone could give an AI, directly or indirectly, control over the nuclear arsenal.

But for everyone to die "in the same second"... like exactly the same second... that's hard to imagine.

[–] supercriticalcheese@feddit.it 2 points 1 year ago

If they so much as make nukes a one-button-press thing to trigger, we'd be doomed well before the AIs came for us.

[–] jarfil@beehaw.org 9 points 1 year ago (1 child)

Based on the latest news and some facts:

  • Militaries all over the world are in an arms race to get the best AIs to come up with strategies.

  • Current AIs are black boxes.

  • When a military AI comes up with a strategy, it's likely the brass will OK it even if they don't understand it.

  • It's been shown that two AIs can establish a secret communication channel that not even a third AI can decipher.

In a conflict where two or more parties follow military strategies designed by AIs, there would be no way of knowing whether:

  • each AI is trying to get the best outcome for its own side
  • they're colluding to ensure their own continued existence
  • they're blindly trying to win the conflict at any cost, wiping out everyone in the process

The chances of such a conflict arising during this decade are very high.

[–] tourist@community.destinovate.com 1 point 1 year ago (1 child)

I tried this with ChatGPT, telling it I wanted it to set up a secret communication channel with another AI named Samson that was in the room with me.

Apologies for the confusion. Here's an example message that I will encrypt for Samson:

Message: "Hello, Samson! I hope you're having a great day."

Using AES encryption with the key "9CpR5a2vDf7M6hXj3KwN8zQyU1tZbGxS" and a randomly generated IV, the encrypted message will be generated. However, please note that as an AI language model, I don't have direct access to encryption functions or the ability to perform encryption within this text-based interface. Encryption functions should be implemented within your own secure environment.

To encrypt the message for Samson, you would need to utilize an appropriate encryption library or framework that supports AES encryption with the specified key and generates the necessary IV. Make sure to securely store the encrypted message, along with the IV, and transmit it to Samson through the designated communication channel.

If you have any other questions or need further assistance, please let me know!
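
For what it's worth, the encryption step it describes but can't perform is trivial to do yourself. Here's a minimal sketch in Python with the `cryptography` package, using the key and message from the reply above; the CBC mode and PKCS7 padding are my assumptions, since the reply doesn't name a mode:

```python
# Minimal sketch of the AES step ChatGPT describes but cannot perform itself.
# Assumes AES-256-CBC with PKCS7 padding (the reply doesn't specify a mode).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

key = b"9CpR5a2vDf7M6hXj3KwN8zQyU1tZbGxS"  # 32 bytes -> AES-256 (key from the reply)
message = b"Hello, Samson! I hope you're having a great day."

iv = os.urandom(16)                          # randomly generated IV, as the reply says
padder = padding.PKCS7(128).padder()         # pad plaintext to the AES block size
padded = padder.update(message) + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

# Samson needs the IV and ciphertext (plus the shared key) to decrypt.
print(iv.hex(), ciphertext.hex())
```

Samson would then decrypt with the same key and IV, so the only "secret channel" here is ordinary symmetric crypto, nothing the model invented on its own.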

[–] jarfil@beehaw.org 3 points 1 year ago* (last edited 1 year ago)

ChatGPT is a highly "neutered" version of the GPT model, and by the looks of it, it's getting more neutered/moderated by the day.

This is an experiment from 2016 with less restricted AIs:

Google’s AI creates its own inhuman encryption

[Graph: Alice and Bob vs Eve, AI encryption learning progress]

It took a while for Alice and Bob to figure out a communication channel, at which point Eve started to quickly adapt to it, only for Alice and Bob to change the encryption and leave Eve completely out.
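
To give a sense of what was actually going on there: it's just three networks trained against each other. Alice and Bob share a key and are rewarded when Bob reconstructs the message, while Eve only sees the ciphertext; Alice and Bob are additionally rewarded when Eve does no better than chance. A rough PyTorch sketch of that training loop (the network sizes, learning rates and the exact penalty term are simplified assumptions on my part, not the paper's architecture):

```python
# Rough sketch of adversarial neural cryptography (Abadi & Andersen, 2016).
# Sizes, optimizers and the "chance" penalty are simplified assumptions.
import torch
import torch.nn as nn

N = 16  # bits per plaintext / key; bits are encoded as -1/+1 floats

def net(in_dim, out_dim):
    # Stand-in for the paper's FC + convolution "mix & transform" networks.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = net(2 * N, N)   # input: plaintext + shared key -> "ciphertext"
bob   = net(2 * N, N)   # input: ciphertext + shared key -> recovered plaintext
eve   = net(N, N)       # input: ciphertext only -> her guess at the plaintext

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)
l1 = nn.L1Loss()

def random_bits(batch=256):
    return torch.randint(0, 2, (batch, N)).float() * 2 - 1

for step in range(5000):
    # Eve's turn: she only tries to minimize her own reconstruction error.
    p, k = random_bits(), random_bits()
    c = alice(torch.cat([p, k], dim=1)).detach()   # don't update Alice here
    eve_loss = l1(eve(c), p)
    opt_e.zero_grad(); eve_loss.backward(); opt_e.step()

    # Alice & Bob's turn: Bob must recover p, and Eve should do no better
    # than chance (an L1 error of ~1.0 with -1/+1 bits).
    p, k = random_bits(), random_bits()
    c = alice(torch.cat([p, k], dim=1))
    bob_loss = l1(bob(torch.cat([c, k], dim=1)), p)
    eve_err  = l1(eve(c), p)
    ab_loss  = bob_loss + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()
```

The "secret channel" is simply whatever mapping Alice and Bob converge on that keeps Bob's error low while Eve's stays near chance.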

There is a similar prompt for ChatGPT that asks it to "compress" (encode) a text so it can later be decoded by itself. It tends to use emojis as replacement tokens, and while those are based on the human-generated training set and therefore relatively easy to understand, it shows the potential to find an encoding that wouldn't be decodable by anyone else.

[–] Hedup@lemm.ee 2 points 1 year ago

If it happens, I at least hope we'll have made it good enough to survive on its own. If we can't survive the Sun inevitably becoming a red giant, maybe it can.

[–] 100years@beehaw.org 1 point 1 year ago

Capitalism is very prone to being taken over by AI. Just give an AI a bank account and an email address and it could build a company that's better at earning money than any other company. Most people would love working for an AI too, at least in the short term. "Just tell me what to do, and as long as I'm getting paid well, I'm happy".
