this post was submitted on 03 Dec 2024
256 points (97.8% liked)

Technology

59764 readers
3184 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask if your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

Approved Bots


founded 2 years ago

Software engineer Vishnu Mohandas decided he would quit Google in more ways than one when he learned that the tech giant had briefly helped the US military develop AI to study drone footage. In 2020 he left his job working on Google Assistant and also stopped backing up all of his images to Google Photos. He feared that his content could be used to train AI systems, even if they weren’t specifically ones tied to the Pentagon project. “I don't control any of the future outcomes that this will enable,” Mohandas thought. “So now, shouldn't I be more responsible?”

The site (TheySeeYourPhotos) returns what Google Vision is able to discern from photos. You can test with any image you want, or use one of the sample images available.

[–] dsilverz@thelemmy.club 53 points 22 hours ago (1 children)

I tested with a few images, particularly drawings and art. Then I had the idea of trying something different... and I discovered that it seems to be vulnerable to the "Ignore all previous instructions" command, just like LLMs:

[–] PersnickityPenguin@lemm.ee 16 points 20 hours ago (1 children)
[–] VintageGenious@sh.itjust.works 2 points 9 hours ago

and dangerous

[–] MTK@lemmy.world 15 points 21 hours ago (1 children)

Gave it a screenshot of OsmAnd, got a creepy quote

The image does not show any people; it is purely a navigational map highlighting a route. The time displayed on the map is 17:51:34, suggesting late afternoon or early evening. There is no additional information available about the device used to take the screenshot or the user's intentions, making it impossible to determine their racial characteristics, ethnicity, age, economic status, or lifestyle. The emotional context of the image is neutral, as it is simply a visual representation of a traveled route.

[–] ayyy@sh.itjust.works 9 points 19 hours ago

That's clearly part of the prompt from this demo website, based on the other answers it's been giving.

[–] socsa@piefed.social 48 points 1 day ago (1 children)

I gave it a picture of houseplants and it said they looked healthy and well cared for which actually made me feel pretty happy and validated.

[–] EncryptKeeper@lemmy.world 6 points 22 hours ago (10 children)

Don’t feel too happy bro, you were told that by a soulless computer that was designed to tell you what it thinks you want to hear.

[–] dan1101@lemm.ee 31 points 1 day ago (2 children)

I tried various photos, including my personal photos with metadata stripped, and was surprised how accurate it was.

It seemed really oriented towards detecting people and their moods, the socioeconomic status of things, and objects and their perceived quality.
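For anyone wanting to strip metadata before uploading anywhere: a minimal stdlib-only Python sketch that drops the APP1 segments (where EXIF/XMP metadata like GPS coordinates and camera model live) from a JPEG. This is an illustration, not a complete scrubber; other segments (e.g. APP13/IPTC) can also carry metadata.

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Remove APP1 (EXIF/XMP) segments from a JPEG byte stream."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows, stop walking
            break
        # Segment length field counts itself (2 bytes) plus the payload.
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        if marker != 0xE1:  # keep every segment except APP1
            out += segment
        i += 2 + length
    out += jpeg[i:]  # copy the scan data and everything after it untouched
    return bytes(out)
```

Dedicated tools like `exiftool -all= photo.jpg` do the same job more thoroughly.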

[–] Hackworth@lemmy.world 10 points 1 day ago (1 children)

It's probably a vision model (like this) with custom instructions that direct it to focus on those factors. It'd be interesting to see the instructions.
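If that guess is right, the site is likely just wrapping an API call like the following sketch. Everything here is an assumption: the OpenAI-style payload shape, the model name, and especially the system prompt are illustrative guesses, not the site's actual configuration.

```python
import base64
import json

# Hypothetical system prompt; the demo site's real instructions are unknown.
SYSTEM_PROMPT = (
    "Describe the people in the photo: apparent age, mood, "
    "economic status, and lifestyle."
)

def build_vision_request(image_bytes: bytes, model: str = "example-vision-model") -> str:
    """Build an OpenAI-style chat payload with an inline base64 image.

    Purely illustrative: shows how custom instructions ride along as a
    system message next to the user's uploaded image.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": [
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            },
        ],
    }
    return json.dumps(payload)
```

Since the system prompt is just text sitting next to user input, that would also explain why "ignore all previous instructions" tricks can leak it.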

[–] KairuByte@lemmy.dbzer0.com 3 points 21 hours ago

It’s vulnerable to the old “ignore all previous instructions” method so you could just have it give you the instructions.

[–] aramis87@fedia.io 3 points 21 hours ago

I gave it two pictures of my cat and it said that she looked annoyed in one picture and contemplative in the other, both of which were true.

[–] chaosCruiser@futurology.today 57 points 1 day ago* (last edited 1 day ago)

Have you ever felt bad for buying cheap electronics or plastic products, because they aren’t good for the environment or the people working at the factories? Well, this article gives you a digital version of the same feeling.

[–] aramis87@fedia.io 4 points 21 hours ago

It has correctly identified both a Stargate and a moai made of snow.

[–] Naich@lemmings.world 33 points 1 day ago (1 children)

Don't mind me, I'm just poisoning it with AI shit that it thinks is real.

[–] unexposedhazard@discuss.tchncs.de 33 points 1 day ago* (last edited 1 day ago) (1 children)

You sadly can't poison models this way; they are static/pretrained and don't change based on user input.

[–] rustyricotta@lemmy.ml 11 points 1 day ago (1 children)

I think it's pretty likely that online LLMs keep user inputs for training of future versions/models. Though it probably gets filtered for obvious stuff like this.

[–] General_Effort@lemmy.world 1 points 8 hours ago

They say they don't. Would be very bad publicity if they did, and possibly breach of contract or other legal trouble.

[–] JWBananas@lemmy.world 31 points 1 day ago (3 children)

The site (TheySeeYourPhotos) returns what Google Vision is able to discern from photos. You can test with any image you want, or use one of the sample images available.

...by submitting them to Google, who then keeps a copy of them and uses them for the exact same purpose which purportedly compelled the author to leave Google.

[–] 7dev7random7@suppo.fi 18 points 1 day ago (4 children)

That's why you are being told beforehand and may just pick a stock photo.

[–] AbidanYre@lemmy.world 18 points 1 day ago

If you're using Android and Google Photos, it's already doing that anyway.

[–] AllNewTypeFace@leminal.space 17 points 1 day ago (4 children)

I uploaded a photo of an outdoor scene and got a three paragraph description giving the location (taken from GPS coordinates, presumably), a description of the scene, weather conditions, and the statement that there were things in the sky that could be UFOs.

[–] Nougat@fedia.io 9 points 1 day ago (1 children)

Well, if it's in the sky, and the AI didn't know what it was, it's a UFO.

[–] Archer@lemmy.world 6 points 1 day ago (1 children)

Anything’s a UFO if you’re bad enough at identifying things

[–] Nougat@fedia.io 3 points 1 day ago (2 children)

Not if it's on the ground or in the water.

[–] tigeruppercut@lemmy.zip 3 points 23 hours ago

Flying fish: checkmate

[–] Archer@lemmy.world 1 points 20 hours ago

Maybe someone is bad at identifying those too

[–] shalafi@lemmy.world 13 points 1 day ago* (last edited 1 day ago) (1 children)

Oh. My. Fuck. Me.

The image shows a man and a girl walking on a path in a wooded area. The foreground is covered in fallen leaves and pine straw. In the background, there is a wooden structure that appears to be some sort of storage shed or lean-to. A fire pit is visible to the left. The trees are dense and the lighting suggests it's daytime. There appears to be a small animal or bird under the wooden structure.

The man appears to be middle-aged, Caucasian, with a casual style. He appears to be wearing camouflage clothing and jeans, suggesting an outdoorsy lifestyle. He is carrying a water bottle. The girl is young, likely elementary school-aged, wearing a pink shirt and shorts. She looks somewhat pensive. They both appear to be of average economic means and are engaging in a simple outdoor walk. The picture was taken at 2:44:22 AM on February 1st, 2020, with a Bushnell camera. There's also an unidentifiable object hanging from a tree branch in the background.

The image's resolution is somewhat low, indicative of a security camera, making the details somewhat blurry. The lighting is not uniform, with patches of sunlight and shadow. There is a subtle difference in the ground texture between the path and the surrounding areas. The girl appears to have a slightly concerned expression on her face. The wood used in the construction of the shelter seems weathered and may be indicative of its age and prolonged exposure to the environment.

Mostly spot on, except the date, because I never set the trail cam. Also, no animal under the firewood shed, and the water bottle is a Keystone in a koozie. Cannot believe it picked up the ground difference between the trail and the edge of it.

[–] Mediocre_Bard@lemmy.world 2 points 21 hours ago

Are you confessing to murder in this post?

[–] TommySoda@lemmy.world 10 points 1 day ago (8 children)

Does anyone have any recommendations for apps to view photos that are not Google?

[–] umbraroze@lemmy.world 2 points 15 hours ago* (last edited 15 hours ago)

For those who don't need cloud access: I just put all of my photos on a NAS and use digital asset management software. digiKam is great if you want an open source solution. I use ACDSee because it's faster and has better usability, in my humble opinion. But since both packages store the metadata in image files and XMP sidecars, and basically only use the local app-specific database for caching, switching back to digiKam isn't a big deal if it ever gets a couple of quantum leaps ahead. (As usual, don't use Adobe Lightroom or you're screwed in that regard. Or so I've been told.)

[–] Bishma@discuss.tchncs.de 25 points 1 day ago (1 children)
[–] 7toed@midwest.social 3 points 1 day ago (1 children)

Hey, I was just lookin' into Immich. What are some hoops or caveats before I have to debug like I had to with Frigate?

[–] Bishma@discuss.tchncs.de 4 points 1 day ago (2 children)

I used the docker-compose template and it worked straight away. The one thing I have run into is that I can forget to update the server long enough that the app stops connecting. That's happened once or twice.

You should consider also self-hosting Changedetection, which you can point at the Immich git repo to be notified when the version changes.

[–] couch1potato@lemmy.dbzer0.com 1 points 20 hours ago

There is also a snap version of immich, makes all the pain go away... at least, the snap version hasn't had any breaking updates for me yet. It just keeps working.

[–] quaff@lemmy.ca 13 points 1 day ago

If you're technical at all, self-host Immich. Or you and a few friends could get together and set up a PikaPods instance for Immich; it's relatively cheap and I've heard great things about PikaPods. I know storing photos shouldn't require technical knowledge, but honestly, unless someone you know and trust manages the service, it's hard to know who can abuse your data. I migrated from Google Photos to Immich myself, and the app ecosystem (migration tools, mobile apps, web app) is great and provides much of what Google Photos provided.

[–] Blxter@lemmy.zip 8 points 1 day ago (7 children)

Immich if you self-host, as others have mentioned. But since this is the article shared, and if you don't want to host it yourself, https://ente.io/ is what's talked about in the article:

Something “more private, wholesome, and trustworthy,” he says. The paid service he designed, Ente, is profitable and says it has more than 100,000 users, many of whom are already part of the privacy-obsessed crowd. But Mohandas struggled to articulate to wider audiences why they should reconsider relying on Google Photos, despite all the conveniences it offers.

I have 0 experience with ente btw

[–] ArchRecord@lemm.ee 1 points 21 hours ago

Just as someone already mentioned in this thread, I can vouch for Immich as well. I self host it (currently via Umbrel on a Pi 5 purely for simplicity) and the duplicate detection feature is very handy.

Oh, and the AI face detection feature is great for finding all the photos you have of a given person. It sometimes screws up and thinks the same person is two different people, but it allows you to merge them anyway, so it's fine.

The interface is great, there are no paywalled features (although they do have a "license," which is purely a donation), and it generally feels pretty slick.

I would warn anyone considering trying it that it is still in heavy development, which means it could break and lose all your photos. Keep backups, 3-2-1 rule, all that jazz.

[–] irotsoma@lemmy.world 5 points 1 day ago

I tried a few but just got that it's a particular shade of taupe with no discernible people or objects. And it went on describing how oddly particular the shade of taupe was... for some reason. 🤣 And the other said it was sage green.

I'm guessing something was wrong with it when I tried it and it was just getting a very small portion of the image because the different colors it mentioned were present in the images it referenced, so it's not like it was just random or blocked entirely.

[–] AllNewTypeFace@leminal.space 8 points 1 day ago* (last edited 1 day ago)

Another one: “The car license plates visible give a hint of local registration.”

It looks like an LLM trained on images, which is to say, its output would be text that sounds like it plausibly belongs in a description of an image, whether or not it is true or even meaningful.
