I have it (33B) running pretty well: GPTQ in oobabooga, RTX 3090 Ti, 64 GB of RAM, ExLlamaV2_HF loader, standard Alpaca template without a modified system prompt. It behaves the same for me with the AWQ version. Please share the GPTQ version you have (group size, act order); I'll post the exact settings I use in an hour. I don't know how my local version compares to the hosted one, but it's pretty good. There's a simple possibility that the GPTQ quant is destroying the model's capability and I'm just not noticing it while you are.
I know it's a stupid thing, but make sure you actually chose instruct mode in the chat window itself. I didn't notice those options at first and got weird results with some models, since I wasn't using the right prompt (the default one was being applied, not Alpaca).
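For reference, the standard Alpaca instruct template I'm talking about looks like this (this is the stock format from the original Stanford Alpaca repo; oobabooga's built-in "Alpaca" template follows the same shape, with `{prompt}` standing in for your message):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

If the model is instead getting the default/notebook-style prompt, the `### Instruction:` / `### Response:` markers never appear, which is usually why the output looks weird or rambly.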