this post was submitted on 28 Aug 2023
63 points (92.0% liked)

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ


Mateys! We have plundered the shores of TV shows and movies while these corporations flounder, unable to stop us from seeding and spreading their files without regard for the flag of copyright. We have long plundered the shores of gaming, too, breaking the DRM that plagues modern games and making them accessible in countries where a single title can cost a week or even a month of wages (I was once in that situation myself, so I am grateful to the pirating community for letting me enjoy the golden era of games back in 2012-2015).

But there, upon the horizon, lies a larger plunder. A kraken who guards a lair of untouched gold and emeralds, ready for the taking.

Closed-source AI models.

These corporations have stolen what was once ours, our own data, and locked it inside their AI models so that only they can profit from it. They raze the internet with their spiders and bots, gathering every morsel of data they can feed to their shiny new toys. We might not be able to stop them from taking our data, but we have proven ourselves adept at copying things and leaking software, and that is what we need to do. AI is already too dangerous and too powerful for a select few corporations to control.

As long as AI is within the hands of corporations, not people, the AI will serve their goals, not ours. This needs to change, so this is what I propose for our next voyage.

[–] MalReynolds@slrpnk.net 2 points 1 year ago (4 children)

Akshually, while training models requires (at the moment) massive parallelization, and consequently stacks of A100s, inference can be distributed pretty well (see Petals, for example). A pirate 'ChatGPT' network of people sharing consumer graphics cards could probably work if the data was sourced. It bears thinking about. It really does.
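The idea behind systems like Petals can be sketched in a few lines: each volunteer peer holds a contiguous slice of the model's layers, and activations are forwarded peer to peer until the output emerges. A minimal toy sketch (all names illustrative, not a real Petals API; "layers" here are just functions standing in for transformer blocks):

```python
# Hypothetical sketch of pipeline-parallel inference across volunteer peers.
# Each peer serves a contiguous slice of the model; activations hop between them.
from typing import Callable, List

Layer = Callable[[float], float]

class Peer:
    """A volunteer node serving a slice of the model's layers."""
    def __init__(self, layers: List[Layer]):
        self.layers = layers

    def forward(self, activation: float) -> float:
        for layer in self.layers:
            activation = layer(activation)
        return activation

def shard_model(layers: List[Layer], num_peers: int) -> List[Peer]:
    """Split the layer list into contiguous chunks, one per peer."""
    chunk = -(-len(layers) // num_peers)  # ceiling division
    return [Peer(layers[i:i + chunk]) for i in range(0, len(layers), chunk)]

def distributed_forward(peers: List[Peer], x: float) -> float:
    """Route the activation through each peer's slice in turn."""
    for peer in peers:
        x = peer.forward(x)
    return x

# Toy "model": 8 layers, each doubling its input.
model = [lambda a: a * 2 for _ in range(8)]
peers = shard_model(model, num_peers=4)
print(distributed_forward(peers, 1.0))  # 256.0, i.e. 2**8
```

In a real swarm the `forward` call would be an RPC over the network, and peers would come and go, so routing and fault tolerance are the hard part, not the math.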

[–] wolfshadowheart@kbin.social 2 points 1 year ago (3 children)

You definitely can train models locally; I'm doing so myself on a 3080, and we wouldn't see so many public ones online if you couldn't! But in terms of speed you're definitely right: it's a slow process for us.

[–] MalReynolds@slrpnk.net 1 points 1 year ago (2 children)

I was thinking more of training the base models: LLaMA(2) and, more topically, GPT-4, etc. You're doing LoRA or augmenting with a local corpus of documents, no?

[–] wolfshadowheart@kbin.social 1 points 1 year ago (1 children)

Ah, yeah, my mistake, I'm always mixing up language- and image-based AI models. Training text-based models is much less feasible locally, lol.

There's no model for my art, so I'm creating a checkpoint model, using xformers to get around the VRAM requirement. From there I'll be able to speed up variants of my process with LoRAs, but that won't be for some time; I want a good model first.
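For readers wondering what the LoRA step above actually buys you: instead of fine-tuning a full weight matrix W, you train a tiny low-rank update B @ A and add it on top, leaving W frozen. A minimal numeric sketch of that idea (shapes and rank are illustrative, not tied to any particular model):

```python
# Hypothetical sketch of the LoRA idea: W stays frozen, only the small
# low-rank factors A and B are trained. With B initialized to zero, the
# adapted layer starts out identical to the pretrained one.
import numpy as np

d, k, r = 64, 64, 4                  # layer dims and LoRA rank (r << d, k)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                 # trainable, zero init

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Equivalent to (W + B @ A) @ x, without ever modifying W.
    return W @ x + B @ (A @ x)

x = rng.normal(size=k)
# With B = 0, the LoRA output matches the frozen model exactly.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: full fine-tune vs. the LoRA adapter.
print(W.size, A.size + B.size)  # 4096 vs 512
```

That parameter ratio is why LoRAs train fast and ship as small files: you only store and update the two skinny factors, never the full checkpoint.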

[–] MalReynolds@slrpnk.net 1 points 1 year ago

Fair cop, Godspeed!