Mirodir

joined 1 year ago
[–] Mirodir@discuss.tchncs.de 14 points 1 year ago* (last edited 1 year ago) (1 children)

I'm not really sure how to describe it, other than: when I read a function to determine what it does, then go to the next part of the code, I've already forgotten how the function transforms the data.

This sounds to me like you could benefit from mentally using the information hiding principle for your functions. In other words: Outside of the function, the only thing that matters is "what goes in?" and "what comes out?". The implementation details should not be important once you're working on code outside of that function.

To achieve this, maybe you could write a short comment right at the start of every function. One to two sentences detailing only the inputs/outputs of that function, e.g. "Accepts an image and a color and returns a mask that shows where that color is present." If you later forget what the function does, all you need to do is read that one sentence to remember. If it's too convoluted to write in one or two sentences, your function is likely trying to achieve too much at once and could (arguably "should") be split up.
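As a minimal sketch of that advice: the one-sentence summary from the paragraph above can live as a docstring, and the function body becomes a detail the reader never has to re-derive. The function name and implementation here are illustrative, not from any real codebase.

```python
import numpy as np

def color_mask(image: np.ndarray, color: tuple[int, int, int]) -> np.ndarray:
    """Accepts an RGB image and a color and returns a mask that shows
    where that color is present."""
    # Implementation detail, hidden behind the one-line contract above:
    # compare every pixel against the target color across the channel axis.
    return np.all(image == np.array(color), axis=-1)
```

Outside this function, "image and color in, boolean mask out" is all a reader needs to keep in their head.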

Also, on a different note: don't sell your ability to "kludge something together" short. If you ever plan to do this professionally or academically, you will sadly and inevitably run into situations where you have no choice but to deliver a quick and dirty solution over a clean and well-thought-out one.

Edit: typos

[–] Mirodir@discuss.tchncs.de 1 points 1 year ago

While I agree with you and also agree with the decision to not show it anymore, I do want to highlight this bit that you wrote:

instead dad physically abuses the misbehaving child and nothing is ever resolved

The positive thing is that it never (or so rarely that I wouldn't remember) presented the strangling as anything good or helpful. Instead it was always presented as a shortcoming of his personality. Homer is mentally ill-equipped to solve conflicts with Bart non-violently. Strangling him was his only outlet, and (at least to attentive viewers) it was clearly and evidently damaging Bart's development. This is for example demonstrated in a scene where Bart is so traumatized that he gets "strangled" by thin air when he thinks his dad is about to go for it.

Also, with the knowledge that Bart is, to some extent, Matt Groening's self-insert, that does raise some rather unpleasant questions.

[–] Mirodir@discuss.tchncs.de 9 points 1 year ago* (last edited 1 year ago) (1 children)

Ignoring the fact that they were clearly talking in orders of magnitude: it was 8MB for a very long time and only recently got increased to 25MB.

[–] Mirodir@discuss.tchncs.de 12 points 1 year ago* (last edited 1 year ago) (1 children)

I went and skimmed the paper because I was curious too.

If my skimming is correct, what they do is similar to adversarial attacks on classifiers, where a second model learns to change as few pixels as possible to confuse a classifier into giving a wrong prediction.

Looking at the examples of dogs and cats: they find pictures of dogs where, by making only minimal changes invisible to the naked eye, they can get the autoencoder to spit out (almost) the same latent representation as an image of a cat would have. Done to enough dog images, this will then confuse the underlying diffusion model into producing latent representations of cat images when prompted to generate a dog. Edit for clarity: those generated latent representations would then decode into cat images.
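A toy sketch of that idea, assuming a fixed linear "encoder" standing in for the frozen autoencoder (the real attack targets a pretrained VAE and uses far more sophisticated optimization; everything below is illustrative): gradient-descend on a small perturbation so the perturbed "dog" lands near the "cat" latent, while clipping the perturbation to stay tiny.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: a frozen linear "encoder" and two flattened "images".
W = rng.standard_normal((8, 64)) * 0.1   # encoder: 64-dim image -> 8-dim latent
dog = rng.standard_normal(64)            # image to poison
cat = rng.standard_normal(64)            # image whose latent we want to mimic

z_target = W @ cat                       # latent the poisoned dog should land on
delta = np.zeros(64)                     # the adversarial perturbation
eps = 0.5                                # per-"pixel" perturbation budget

for _ in range(500):
    # Gradient of ||W(dog + delta) - z_target||^2 with respect to delta.
    grad = 2 * W.T @ (W @ (dog + delta) - z_target)
    delta -= 0.05 * grad
    delta = np.clip(delta, -eps, eps)    # keep the change "invisible"

before = np.linalg.norm(W @ dog - z_target)
after = np.linalg.norm(W @ (dog + delta) - z_target)
print(before, after)  # the residual should shrink substantially
```

The clipping step is what makes the change hard to see while the latent still moves toward the target, which is the core of the attack described above.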

If my thinking doesn't fail me, this attack could easily be thwarted by unfreezing the pretrained autoencoder. In the paper that introduced latent diffusion they write that such approaches already exist. If "Nightshade" takes off, I'm sure those approaches would be refined and used. Even just finetuning the autoencoder for a few epochs first should be enough to move the latent representations of the poisoned dog images and those of the cat images they're meant to resemble far enough apart to make the attack meaningless.

Edit: I also wonder how robust this attack is against just adding an imperceptible amount of noise to the poisoned images.

[–] Mirodir@discuss.tchncs.de 18 points 1 year ago (1 children)

I'll save you some time and give you the definition from Merriam-Webster:

any of various crimes (such as assault or defacement of property) when motivated by hostility to the victim as a member of a group (such as one based on color, creed, gender, or sexual orientation)

It's murder (a crime) with racism as the main motivator.

[–] Mirodir@discuss.tchncs.de 6 points 1 year ago

BlockTube

By now it can also block (remove) other things: auto-generated playlists, the Explore page, Shorts, Movies, and a few other things.

I mostly use it to block content about stuff I don't want to get spoiled on. It supports regex, so it's fairly easy for me to very rarely see anything that would diminish my experience of a specific piece of media (assuming I don't forget to set it up in the first place...).
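BlockTube's own filter syntax isn't shown here; as a plain illustration of the regex idea, a pattern like the following would match video titles about a show you want to avoid spoilers for (the show name and phrasing are made up):

```python
import re

# Hypothetical spoiler filter: match any title mentioning the show,
# case-insensitively, combined with common spoiler-y wording.
spoiler = re.compile(r"(?i)my favou?rite show.*(finale|ending|spoilers?)")

titles = [
    "My Favourite Show FINALE explained!",
    "Unrelated cooking video",
]
blocked = [t for t in titles if spoiler.search(t)]
```

The optional `u?` also catches the "favorite"/"favourite" spelling difference, which is the kind of robustness that makes regex filters worth the setup effort.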

[–] Mirodir@discuss.tchncs.de 3 points 1 year ago

You might not be able to stop an AI directly because of the reasons you listed. However, OpenAI is probably at least competent enough to not send the response directly to the AI, but instead have a separate (non-AI) mechanism that simply doesn't let the AI access the responses of websites with a certain line in their robots.txt.
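A minimal sketch of that kind of gate, using Python's standard `urllib.robotparser`. How OpenAI actually implements this is not public; "GPTBot" is their documented crawler name, but everything else here is illustrative:

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that opts the site out for one specific crawler.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

def may_fetch(agent: str, url: str) -> bool:
    # The non-AI gate: consult the parsed rules before letting the
    # model see anything fetched from this site.
    return parser.can_fetch(agent, url)

print(may_fetch("GPTBot", "https://example.com/article"))        # False
print(may_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The point is that this check runs entirely outside the model, so the AI never gets a chance to "decide" to ignore it.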
