I mostly understand the dilemma, but I want to see if anyone has had better success with their AI assistant.
I use the Ollama integration and have a conversation model set up. The default Home Assistant agent knows to use the home forecast entity whenever I ask about the weather, but no matter whether I also set up an AI task model, toggle “control Home Assistant” on or off, or toggle “perform local commands” on or off, the Ollama models never reference the home forecast the way the default Home Assistant agent can. I thought keeping default commands on might preserve that ability while the Ollama LLM answered everything else. I just want a smarter AI. Any suggestions?
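
If it helps illustrate what I mean, here is a minimal sketch of the kind of thing I imagine could expose the forecast to whichever agent answers: a trigger-based template sensor that polls the forecast entity with the weather.get_forecasts action. The entity ID weather.home and the sensor names are just placeholders, not my actual config, and I haven't confirmed this is the right approach - hence the question.

```yaml
# Sketch only: expose the daily forecast as a sensor the conversation agent can read.
# Assumes the forecast entity is weather.home; adjust names to your setup.
template:
  - trigger:
      - platform: time_pattern
        hours: "/1"           # refresh once an hour
    action:
      - service: weather.get_forecasts
        data:
          type: daily
        target:
          entity_id: weather.home
        response_variable: daily_forecast
    sensor:
      - name: "Home Forecast Today"
        unique_id: home_forecast_today
        state: "{{ daily_forecast['weather.home'].forecast[0].condition }}"
        attributes:
          temperature: "{{ daily_forecast['weather.home'].forecast[0].temperature }}"
          forecast: "{{ daily_forecast['weather.home'].forecast }}"
```

The idea would be to expose that sensor to Assist so the Ollama model has the forecast in its context, but I don't know if that's the intended way to do it.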
Nice! These are great suggestions, and I apologize for any incorrect terminology in my post. To clarify, the vanilla/default Home Assistant agent gets the forecast right every time. I just want that agent to take the wheel when simple commands come through, and the LLM to take the wheel when I ask random questions unrelated to home automation.