For better readability, OP's original text rewritten by ChatGPT:
This is the third time I am trying to rewrite this introduction on this thread. While I am not sure why I need to do this, I feel it is important to make a good introduction. So, hello everyone, and good night from here. I hope you enjoy your holiday or Thanksgiving.
Uncensored local AI models are widely loved and, in some cases (Mistral, for example), championed like a golden child because they simply perform well compared to their aligned counterparts. But the assumption that these "uncensored" models are fully uncensored is mistaken.
Some of them are merges of several existing models, combined to produce a new model that drops alignment while retaining capability and value, such as Mistral-Orca 7B. Others are simply fine-tuned on an uncensored dataset while the underlying base model remains aligned, so they still tend to advise or lecture the user instead of giving a straightforward answer.
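As a rough illustration of that second approach (fine-tuning only, with the base model's alignment still baked into the weights), here is a minimal sketch using Hugging Face transformers and datasets. The model ID and the dataset name are placeholders/assumptions rather than a specific recipe; the point is that this step alone does not remove whatever alignment behavior the base checkpoint already carries.

```python
# Minimal sketch: fine-tune a base causal LM on an "uncensored" instruction dataset.
# The dataset name below is hypothetical; swap in whatever you actually use.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "mistralai/Mistral-7B-v0.1"       # base checkpoint (assumed example)
dataset_name = "your/uncensored-instructions"  # hypothetical dataset

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

def tokenize(batch):
    # Assumes a single "text" column holding the full prompt + response.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

train_data = load_dataset(dataset_name, split="train").map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="uncensored-ft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
    ),
    train_dataset=train_data,
    # mlm=False -> causal LM objective; the collator pads and builds labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Whatever refusal or lecturing style the base model learned before this step tends to survive it, which is exactly the behavior described above.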
A model can only be called truly uncensored when it stops lecturing the user and faithfully generates the output the user actually asked for.
Those are my thoughts for now. I hope everyone enjoys their Thanksgiving.