Even accounting for inflation, arcades should be cheaper.
The compute hardware costs much less and is much more power-efficient. Other power-hungry features like lights and displays are also both cheaper and more efficient than they used to be.
The argument that they still need to be expensive makes little sense; the only cost that plausibly justifies it is the physical space they occupy.
Don't forget the fundamental scaling properties of LLMs, which OpenAI itself used as the basis of its strategy for building GPT-3.5.
But basically, LLM performance scales logarithmically with resources. Rapid improvements come easily early on, but at the point we're at now, it takes exponentially more compute, training data, and model size to get ever-smaller improvements.
Even if we get a 10x increase in compute, model size, and training data (the last of which is fundamentally finite), the improvements aren't going to be groundbreaking or solve any of the inherent limitations of the technology.
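A rough back-of-envelope sketch of what that means, in Python. I'm assuming a compute exponent of ~0.05, in the ballpark of what the Kaplan et al. scaling-laws paper reported; the exact constants here are made up, but the shape of the curve is the point:

    # Power-law scaling: loss falls as L(C) = (C0 / C) ** alpha.
    # alpha ~= 0.05 is roughly the compute exponent from Kaplan et al. (2020);
    # C0 is a fitted constant (arbitrary here, since we only take ratios).
    def loss(compute, c0=1.0, alpha=0.05):
        return (c0 / compute) ** alpha

    base = loss(1.0)
    print(f"10x compute:  loss x{loss(10.0) / base:.3f}")   # ~0.891, ~11% better
    print(f"100x compute: loss x{loss(100.0) / base:.3f}")  # ~0.794, ~21% better

So under those assumptions, each additional 10x of compute only shaves about 11% off the loss, which is why the curve feels so flat from where we are now.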