conderoga

joined 1 year ago
[–] conderoga@beehaw.org 2 points 1 year ago (2 children)

LLM-generated text can also be detected fairly easily, provided you can figure out which model it came from and have access to its weights. For people training models, this won't be hard to do.
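
A minimal sketch of the idea, assuming you have the suspected source model's weights (here GPT-2 via Hugging Face transformers, purely as a stand-in): text the model itself generated tends to score unusually low perplexity under that same model, so you can flag passages below a threshold calibrated on known human writing.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; in practice, the model you suspect produced the text.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model; lower values hint the model wrote it."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# Compare a suspect passage against a threshold calibrated on known human text.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```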

I agree with the take that building better and better training datasets is going to get easier over time, not harder. The story of AlphaZero is a good example of this too: the best chess AI quickly trounced engines trained on human games simply by playing against itself. To me, that suggests training on LLM output can lead to even better results, since you can generate so much more of it.
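
As a rough sketch of that "train on your own output" loop (model name and prompts here are placeholders, not a validated recipe): sample completions from the current model, keep them, and treat them as extra training text for the next round, analogous to AlphaZero generating its own games.

```python
from transformers import pipeline

# Stand-in model; any causal LM would do for the illustration.
generator = pipeline("text-generation", model="gpt2")

prompts = [
    "The opening move in this game was",
    "A short explanation of gradient descent:",
]

synthetic_corpus = []
for prompt in prompts:
    # Sample several completions per prompt to grow the synthetic dataset.
    outputs = generator(prompt, max_new_tokens=60, do_sample=True, num_return_sequences=4)
    synthetic_corpus.extend(sample["generated_text"] for sample in outputs)

# `synthetic_corpus` would then be filtered (the hard part) and fed back in
# as additional fine-tuning data for the next iteration of the model.
print(len(synthetic_corpus), "synthetic samples")
```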

[–] conderoga@beehaw.org 1 point 1 year ago

I think this take makes the most sense. It seems like the totally free and open Lemmy instances will do their best to recreate the Reddit they came from. Other communities will aim for something more tight-knit (not unlike Discord servers). Both can coexist, but it is hard to imagine the tight-knit ones taking much advantage of the federation features.