...we're talking about a ban of links to Twitter on a gaming subreddit. Those links would be to, like, game news. That's not "fascist content".
lukewarm_ozone
Oh, that's really cool. I hope there's more linkage between the twitter-like and reddit-like islands of the fediverse in the future; I'm somewhat interested in reading the former but it seems to be complicated to actually get federation with it.
It's a Lemmy feature: every instance can have a list of slurs that are automatically removed from all messages. You can see an instance's slur regex in the /site API endpoint, under the key slur_filter_regex. Lemmy.ca's filter bans the word "retarded", among other things.
I've looked at a few other instances and their filters are all interestingly different. Someone should do data science on this. E.g., I've yet to find an instance that uses the filter to automatically censor ideologically opposing sites - which is better than I expected, but it's almost certain that some instances do.
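For illustration, here's a minimal Python sketch of how a filter like that behaves. The regex value and the replacement placeholder here are assumptions for the example; the real value for any given instance is whatever its /site response reports under slur_filter_regex.

```python
import re

# Hypothetical example value - a real instance exposes its own regex
# in the /site API response (key: slur_filter_regex).
slur_filter_regex = r"\b(badword|worseword)\b"

def apply_slur_filter(text: str, pattern: str) -> str:
    """Replace any match of the instance's slur regex with a placeholder,
    roughly the way Lemmy rewrites filtered words in a message."""
    return re.sub(pattern, "*removed*", text, flags=re.IGNORECASE)

print(apply_slur_filter("that's a BadWord, honestly", slur_filter_regex))
```

Since the filter is just a server-side regex substitution, comparing instances really would reduce to fetching each one's /site endpoint and diffing the patterns.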
Sure, in Firefox itself it wasn't a severe vulnerability. It's way worse on standalone PDF readers, though:
In applications that embed PDF.js, the impact is potentially even worse. If no mitigations are in place (see below), this essentially gives an attacker an XSS primitive on the domain which includes the PDF viewer. Depending on the application this can lead to data leaks, malicious actions being performed in the name of a victim, or even a full account take-over. On Electron apps that do not properly sandbox JavaScript code, this vulnerability even leads to native code execution (!). We found this to be the case for at least one popular Electron app.
Huh? What do you mean "if"? Exactly such a PDF vulnerability happened a few months ago; it was fixed in Firefox 126: https://codeanlabs.com/blog/research/cve-2024-4367-arbitrary-js-execution-in-pdf-js/.
There’s no real need for pirate ai when better free alternatives exist.
There are plenty of open-source models, but I'm afraid they very much aren't better. Even if you have a powerful workstation GPU and can afford to run the serious 70B open-source models at low quantization, you'll still get results significantly worse than the cutting-edge cloud models - both because the most advanced models are proprietary, and because they're big enough to need hundreds of gigabytes of VRAM, which you can trivially rent from a cloud service but can't easily get in your own PC.
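To put rough numbers on that "hundreds of gigabytes" claim, here's a back-of-the-envelope sketch counting weights only (ignoring KV cache and activations, which add more on top). The 400B figure for a frontier-scale model is an assumption for illustration, since proprietary model sizes aren't public:

```python
def weight_vram_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate VRAM needed just to hold the weights, in GB."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A 70B model:
print(weight_vram_gb(70, 16))  # fp16: 140.0 GB
print(weight_vram_gb(70, 4))   # 4-bit quantization: 35.0 GB

# A hypothetical ~400B frontier-scale model at fp16:
print(weight_vram_gb(400, 16))  # 800.0 GB
```

Even aggressively quantized, a 70B model barely fits on a single high-end consumer card, and anything frontier-scale is firmly multi-GPU-server territory.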
The same goes for image generation - compare results from proprietary services like midjourney to the ones you can get with local models like SD3.5. I've seen some clever hacks in image generation workflows - for example, using image segmentation to detect a generated image's face and hands and then a secondary model to do a second pass over these regions to make sure they are fine. But AFAIK, these are hacks that modern proprietary models don't need, because they have gotten over those problems and just do faces and hands correctly the first time.
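The detect-then-refine hack described above can be sketched like this. The detector and refiner here are hypothetical stubs operating on a toy nested-list "image"; in a real workflow they'd be an image-segmentation model and a diffusion inpainting pass, respectively.

```python
def detect_regions(image):
    """Stub detector: return bounding boxes (x, y, w, h) for faces/hands.
    A real workflow would run a segmentation model here."""
    return [(2, 2, 2, 2)]  # pretend we found one face at this box

def refine(patch):
    """Stub refiner: a real one would re-generate the patch at high detail.
    Here we just bump every pixel so the change is visible."""
    return [[px + 1 for px in row] for row in patch]

def second_pass(image):
    """Crop each detected region, refine it, and paste it back in place."""
    for x, y, w, h in detect_regions(image):
        patch = [row[x:x + w] for row in image[y:y + h]]
        fixed = refine(patch)
        for dy, row in enumerate(fixed):
            image[y + dy][x:x + w] = row
    return image

img = [[0] * 6 for _ in range(6)]  # toy 6x6 "image" of zeros
out = second_pass(img)
```

The point of the pattern is that only the detected crops get re-generated, so the rest of the image stays untouched - which is also why proprietary models that get faces and hands right on the first pass don't need it.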
This isn't to say that running transformers locally is always a bad idea - you can get great results this way - but the claim that it's better than the nonfree models is mostly cope.
Incredibly weird that this thread was up for two days without anyone posting a link to the actual answer to OP's question, which is g4f.
Difficulty is hardly the point of the post.
I haven't, actually, since I normally use an adblocker (and also don't use that tracker). Looks like they're all VPN advertisements right now, which is at least a somewhat non-mainstream ad segment.
It very nearly did, but there's, like, 2 working instances with heavy ratelimits.