Yeah, this is what I keep trying to explain to people. I think AI ethics is important, but the major wave of apocalyptic AI fear-mongering just comes across as regulatory capture. The ethical response is to expand access and get more backgrounds involved, not to leave it up to people in power.
gunshoes
It would more than likely lose the ability to "make funny." The problem is called "catastrophic forgetting," and it comes up when pretrained models are fine-tuned on downstream tasks. There's some literature showing that the original pretraining induces lasting bias (English BERT models fine-tuned on multilingual sets retain English syntax patterns), but more often than not the model purges the ability to perform its original task.
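To make it concrete, here's a minimal toy sketch (my own contrived PyTorch setup, not a real BERT experiment): train a small net on task A, fine-tune it on task B with no rehearsal of A, and task-A accuracy collapses.

```python
# Toy demo of catastrophic forgetting: pretrain on task A,
# fine-tune on task B, then re-check accuracy on task A.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(w):
    # Binary labels from a linear boundary with weight vector w.
    X = torch.randn(2000, 2)
    y = (X @ w > 0).long()
    return X, y

task_a = make_task(torch.tensor([1.0, 1.0]))   # "pretraining" task
task_b = make_task(torch.tensor([1.0, -1.0]))  # downstream task

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def fit(X, y, steps=500):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

@torch.no_grad()
def accuracy(X, y):
    return (model(X).argmax(dim=1) == y).float().mean().item()

fit(*task_a)
print(f"task A after pretraining: {accuracy(*task_a):.2f}")
fit(*task_b)  # fine-tune with no rehearsal of task A
print(f"after fine-tuning: {accuracy(*task_b):.2f} on B, "
      f"{accuracy(*task_a):.2f} on A")  # A typically drops toward chance
```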
Most ML work is oriented towards industry, which in turn is oriented towards customers. Most users don't understand p-values, but they do understand one number being bigger than another. Add to that the fact that checking p-values would probably invalidate 75% of research publications, and there's just no incentive.
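For anyone curious what the skipped check even looks like: a hedged sketch of McNemar's test on per-example correctness of two models over the same test set. The arrays here are simulated stand-ins, not results from any real paper.

```python
# McNemar's exact test: is "model A beats model B" actually significant?
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
n = 1000
# Simulated per-example correctness (True = correct) for two models.
correct_a = rng.random(n) < 0.81
correct_b = rng.random(n) < 0.80

# Discordant pairs: examples where exactly one model is right.
b = int(np.sum(correct_a & ~correct_b))  # A right, B wrong
c = int(np.sum(~correct_a & correct_b))  # B right, A wrong

# Under the null, the discordant pairs split 50/50 between the models.
result = binomtest(b, b + c, p=0.5)
print(f"acc A={correct_a.mean():.3f}, acc B={correct_b.mean():.3f}, "
      f"p={result.pvalue:.3f}")  # the "bigger number" often isn't significant
```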
Genuine question: why? Everyone is optimistic about AI. The market loves it, and people already in industry are enjoying the new funding opportunities. The doomerism is just a fringe group that may be using it as a marketing gimmick (cough open cough ai). Why do you see a need to cater specifically to optimism?