I’ve said it before and I’ll say it again.

Self-censorship is the worst kind because you’re not even initially trying to get the original message out. You’re doing the advertiser’s work for them when you make your posts friendly to the algorithm.
I work really hard to make my output worthless, in all aspects of life. It's galling to even fail at that.
His name is Adam Aleksic and his book (Algospeak) was REALLY good.
It’s vector databases all the way down
databases are just lists of lists
Ok so how do you actually evade algorithmic censorship then? I assume the embedding is still based on transcribed text, so word choice should matter on some level even if it isn't the be-all and end-all, right? Matching on the other content preferences of people who like your content does seem harder to get around, though.
Anyway here is the link to the paper the video mentions: https://concetticontrastivi.org/wp-content/uploads/2023/01/1369118x.2016.1154086.pdf
Nope. There are studies with vector databases that show that even language doesn’t matter, the words start grouping together automatically based on relevance just by the way the math works.
In theory you could try inventing a fake language so weird that it doesn't match anything existing, but at that point you might as well just encrypt your stuff.
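The grouping-by-relevance idea boils down to vector similarity. Here is a minimal sketch using cosine similarity; the 4-dimensional "embeddings" below are invented purely for illustration (a real system would get dense vectors from a trained model, where related words in any language end up pointing in similar directions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 means same direction, ~0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" -- made up for illustration only.
embeddings = {
    "protest":       [0.90, 0.10, 0.00, 0.20],
    "demonstration": [0.85, 0.15, 0.05, 0.25],  # near-synonym
    "manifestación": [0.88, 0.12, 0.02, 0.22],  # Spanish, same concept
    "casserole":     [0.05, 0.90, 0.30, 0.10],  # unrelated topic
}

base = embeddings["protest"]
for word, vec in embeddings.items():
    print(f"{word:15s} {cosine_similarity(base, vec):.3f}")
```

With vectors like these, "demonstration" and "manifestación" score close to "protest" while "casserole" does not, which is the sense in which word choice and even language stop mattering.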
the words start grouping together automatically based on relevance just by the way the math works
Sure, but isn't it still the words that are grouping together? The guy in the OP video seems to be claiming that the fact that he used certain words doesn't matter, which doesn't make sense to me, since the depth of understanding these algorithms have of what is being said is still somewhat shallow.
I would guess that it should be possible to engineer a sentence that communicates a particular message, but is phrased in such a way that it targets a location in vector space that is not associated with that message (until the other parts of their system make that association).
If you can give ChatGPT the transcript and it can say "yes that's about ____", then that means it's certainly possible for them to do the same. I would expect that anything trained specifically for that should only get better from there, although obviously they're not going to throw ChatGPT-sized compute at it.
although obviously they’re not going to throw ChatGPT-sized compute at it.
I'm not entirely sure what more fundamental distinctions there may be between embeddings and LLMs, but smaller LLMs really struggle with comprehension when things are phrased in an unexpected way, and embeddings use comparatively very few resources. Maybe a circumvention training tool could work like this: a writing game where the goal is to produce text about a topic such that the embedding fails to associate it with that topic, but a more powerful LLM succeeds (the idea being that a human would probably be able to tell as well). The biggest advantage these systems have is probably just that people never get direct feedback about how their work is being interpreted.
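The writing game described above could be sketched as a scoring loop. Everything here is a placeholder: `embed()` is a crude keyword counter standing in for a real embedding model, and `strong_llm_understands()` stands in for a call to a large LLM (or a human judge):

```python
import math

def embed(text):
    # Placeholder: a crude bag-of-keywords "embedding". A real version
    # would return a dense vector from a trained embedding model.
    keywords = ["protest", "march", "rally"]
    return [text.lower().count(k) for k in keywords]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def strong_llm_understands(text, topic):
    # Placeholder for a powerful LLM judging whether the text is about
    # the topic; hardcoded to True so the sketch runs standalone.
    return True

def score_attempt(text, topic_reference, topic):
    """Player wins when the weak embedding misses the topic but the
    strong model (standing in for a human reader) still gets it."""
    evades_embedding = cosine(embed(text), embed(topic_reference)) < 0.5
    still_readable = strong_llm_understands(text, topic)
    return evades_embedding and still_readable

reference = "a protest march and rally downtown"
attempt = "folks gathered with signs to make their voices heard"
print(score_attempt(attempt, reference, "protest"))  # True with these stubs
```

The paraphrased attempt shares no keywords with the reference, so the stub embedding misses it while the (stubbed) strong model still recognizes the topic; a real version would swap in an actual embedding model and LLM, which is exactly where the feedback the comment mentions would come from.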
It'd be great if people just stopped submitting to these algorithms. Every time I hear someone on a podcast talking about "their" algorithm, as if it were some benign, cutesy thing, I want to puke. Just quit the corporate bullshit and come absorb depressing content on Lemmy like the rest of us, jeez.
pov: when you want to be different…