As someone who works on LLM-based stuff: this is a terrible idea with current models and techniques unless they keep a dedicated team of human editors around to make sure the AI doesn't go off the rails, to say nothing of the cruelty of firing people to save maybe a few hundred thousand dollars while taking a substantial hit to quality. Models can be very capable with careful prompting, but they're also inconsistent and need a lot of handholding for anything that requires executive function or deliberation (like, say, writing an article that's meant to make a point). Maybe it's technically possible with current models, but the field is too new and the techniques too crude to make this work without a few million dollars in R&D, and even that would probably be wasted, since new developments land practically every week.
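For concreteness, here's roughly the shape of the "handholding" I mean: every draft funnels through a human editor, and the model gets kicked back for revisions until someone signs off. This is just a sketch under my own assumptions; `call_llm`, `Draft`, and `write_article` are hypothetical stand-ins, not anyone's actual stack.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    topic: str
    text: str
    revision: int = 0

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever model API the outlet would use."""
    raise NotImplementedError("wire up a real model provider here")

def write_article(topic: str, max_revisions: int = 3) -> Draft | None:
    # First pass: ask the model for a draft.
    draft = Draft(topic, call_llm(f"Write a news article about: {topic}"))
    for _ in range(max_revisions):
        # The model is inconsistent, so every draft still gets a human read.
        notes = input(f"Editor notes on '{topic}' (blank to approve): ").strip()
        if not notes:
            return draft  # editor signed off
        # Otherwise, feed the editor's notes back in and try again.
        draft = Draft(
            topic,
            call_llm(
                "Revise this article per the editor's notes.\n"
                f"Notes: {notes}\n\nArticle:\n{draft.text}"
            ),
            draft.revision + 1,
        )
    return None  # too many bad passes; hand it to a human writer
```

The whole value is in that human approval gate, which is exactly the part they're firing.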
Also, wait, wtf are they going to do for game reviews? RL agents can barely complete Minecraft (which is an astonishing achievement, but it's so bleeding edge it might just cut to the bone). Even if they got some ultra-high-tech multimodal, multi-model AI to play a game and review it, it would need to be an artificial person (AGI plus autonomy) to even approximate human sensibilities and preferences.