this post was submitted on 07 Oct 2025
265 points (99.6% liked)

Technology
Microsoft Copilot, not so much

top 17 comments
[–] slazer2au@lemmy.world 74 points 2 weeks ago (3 children)

And because morons do this, I have to go through yearly mandatory training at work telling people not to do it.

Even more annoying is that I actively opt out of the AI bullshit work is pushing.

[–] Semi_Hemi_Demigod@lemmy.world 31 points 2 weeks ago (2 children)

I had to go through yearly mandatory training telling me not to buy potential clients sex workers.

People are fucking idiots and we all pay for it.

Humanity is the worst.

[–] MNByChoice@midwest.social 8 points 2 weeks ago

I did those as well. That and not using personal equipment.

Then at one company, we could use personal equipment. And accepting gifts was suddenly okay. And their training did NOT say I could not buy potential clients sex workers.

I never had the opportunity to clarify with HR...

[–] Rambomst@lemmy.world 3 points 2 weeks ago

I get to go through yearly training telling me corruption and bribery are bad... Didn't know that, it was really eye-opening /s

[–] floofloof@lemmy.ca 8 points 2 weeks ago* (last edited 2 weeks ago)

I have to do a couple of hours of multiple choice tests every year answering questions like "Is 'password' a good password?" But because that's a bit high level for many people, it is all presented in the form of "amusing" animations and skits with the questions at the end and no fast forward button.

[–] Saarth@lemmy.world 8 points 2 weeks ago

If companies are gonna push AI, I am gonna use AI. It's on the company's infosec to ensure my workarounds don't work.

[–] mesamunefire@piefed.social 39 points 2 weeks ago

lol the CEO does this all the time.

I would love to see the txt coming in. It would make stocks very easy to pick!

[–] MakingWork@lemmy.ca 22 points 2 weeks ago (1 children)

Since the average person does not know who has access to prompts put into ChatGPT, they assume it is safe to enter private information and that it will go unread by humans.

I don't even know how safe duck.ai is.

[–] taiyang@lemmy.world 14 points 2 weeks ago

I like duck.ai, but yes, anonymous or not, never submit private data to a third party. I often warn my wife about that; I don't think she should even share secrets over SMS, let alone with a third-party app.

Though, interestingly, her company does have a private server hosting a closed off version of ChatGPT explicitly for this reason, lol.

[–] sik0fewl@lemmy.ca 20 points 2 weeks ago

To be fair, they were doing this before AI, as well.

[–] MoogleMaestro@lemmy.zip 16 points 2 weeks ago

At work we had to remind people recently to not do this. I think it was originally brought up as a joke, but we all felt the need to remind each other just in case anyone was seriously thinking about it.

[–] quick_snail@feddit.nl 12 points 2 weeks ago

Wait till you learn about Grammarly

[–] Dsklnsadog@lemmy.dbzer0.com 12 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

If companies like OpenAI were more astute, they would recognize that they can accumulate sufficient information to compete in other markets and evolve into technology firms capable of outpacing their competitors. If managed properly, they could even frame it as a form of reverse engineering. And if everything related to AI were truly just a bubble that eventually burst, it would make little difference whether they began as AI companies, since their ultimate purpose could shift over time. And yes, I am being ironic with “if they were more astute,” because I believe that what I am describing is clearly already being considered.

[–] MNByChoice@midwest.social 15 points 2 weeks ago

I fully expect OpenAI and the others are already doing this. And this is why the USA wants to keep China from developing better AI. Not for better AI, but to keep spying on corporate secrets within the USA.

[–] fubarx@lemmy.world 7 points 2 weeks ago

Someone I know just got a job offer and pasted that offer letter and his current job's offer letter into ChatGPT to compare.

That cow may well have left the barn.

[–] TommySoda@lemmy.world 6 points 2 weeks ago

So what you are saying is that if I went to ChatGPT and asked super nicely what [blank] company's secrets were, it would know the answer? There would be no way to fact-check it, but I do find that incredibly funny.

[–] ramenshaman@lemmy.world 2 points 2 weeks ago

Might want to try Lumo