14specks

joined 3 years ago
[–] 14specks@lemmy.ml 1 points 1 year ago

If I'm making a mistake, I don't think that would be it. I've been observing Bitcoin and its community since 2011.

[–] 14specks@lemmy.ml 27 points 1 year ago (3 children)

Key difference is that Bitcoin people want/need their numbers to go up, up, up as a measure of success.

Here, we are hoping to cultivate a healthy community (at either/both the instance and fediverse level). From my experience on various subreddits, focusing on growth is not a good way to do this.

Communities are defined more by who is not allowed in than by who is in the community. Lemmy phase 2 kicked off back in June, and it still needs some time to find its footing at a sustainable rate of growth.

[–] 14specks@lemmy.ml 1 points 1 year ago

I see. I knew that person had a huge bone to pick with the Lemmy devs over their personal politics (nearly irrelevant on a federated platform imo), so I didn't know if this was along the same lines.

[–] 14specks@lemmy.ml 1 points 1 year ago (2 children)

Is that the same person who runs the FediTips Mastodon?

[–] 14specks@lemmy.ml 2 points 1 year ago (1 children)

Would be great to have other mobile apps fill the gap, so that the devs can focus more heavily on the main site instead of Jerboa.

[–] 14specks@lemmy.ml 2 points 1 year ago (1 children)

Best place to get an answer is on GitHub, since that's where development is being organized.

[–] 14specks@lemmy.ml 0 points 1 year ago (1 children)

It is possible that deletions will not propagate to other servers if they are running a forked version of the software.

[–] 14specks@lemmy.ml 4 points 1 year ago

Yeah, they are really busy right now, so if you want them to get seen, you'll want to check what is/isn't already on the GitHub and merge them into the existing issues there.

[–] 14specks@lemmy.ml 1 points 1 year ago (1 children)

This also makes me wonder how these models are going to be trained in the future. What happens when, for example, half of the training data is output from previous models? How do you steer/align future models and prevent compounding errors and bias? Strange times ahead.

Between this and the "deep fake" tech, I'm kinda hoping for a light Butlerian jihad that gets everyone to log tf off and exist in the real world, but that's kind of a hot take.

[–] 14specks@lemmy.ml 2 points 1 year ago (3 children)

I think what sites have been running into is that it's difficult to tell what is and is not AI-generated, so enforcing a ban is hard. Some would say it's better to have an AI-generated response out in the open, where it can be verified and prioritized appropriately based on user feedback. If there's a human-generated response that's higher quality, then that should win anyway, right? (Idk tbh)