With local models and inference engines like llama.cpp available, I wish the modder had instead spent their energy on models that run locally, possibly even fine-tuned on the in-game world. Instead, this mod requires a metered API, with billing and an always-on network connection, while serving only a generic language model with little in-game knowledge.
This post was submitted on 13 Jul 2023.
Cyberpunk 2077
!cyberpunk2077@lemmy.ml