What you highlighted as problems are exactly why people fork out money for the compute to run 34b and 70b models. You can tweak sampler settings and prompt templates all day long, but you can only squeeze so much smarts out of a 7b-13b parameter model.
The good news is that better 7b and 13b models are coming out all the time. The bad news is that, even so, you're still not going to beat a capable 70b model if you want it to follow instructions, remember what's going on, and stay consistent with the story.