Not in Home Assistant directly, but I made a flow in n8n where an LLM can detect that I'm asking about the weather, call a weather API, parse the response, and answer.
You can do something similar in Home Assistant.
Add an integration to a weather service (there might even be one out of the box).
Create an automation triggered by saying a sentence to your voice assistant.
Set the automation's action to a conversation response, and point it at whichever entity holds the part of the weather you want it to say (or use a template if you want it to combine several values or do anything fancier).
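The steps above can be sketched as a single automation. This is a minimal example, not a drop-in config: the trigger sentence, the `weather.home` entity ID, and the response wording are all assumptions you'd adjust to your own setup.

```yaml
# Sketch: voice sentence trigger -> spoken weather answer.
# weather.home is a placeholder; use your own weather entity.
automation:
  - alias: "Voice weather answer"
    trigger:
      - platform: conversation
        command:
          - "what's the weather like"
    action:
      # Template the response from the weather entity's state and attributes.
      - set_conversation_response: >-
          It's currently {{ states('weather.home') }} and
          {{ state_attr('weather.home', 'temperature') }} degrees.
```

The `conversation` trigger fires when Assist matches the sentence, and `set_conversation_response` is what the voice assistant speaks back.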
What I do is use extended_openai_conversation from HACS to hook into my LLM's OpenAI-compatible API endpoint. That makes it available through the regular Voice Assistant pipeline within Home Assistant.
Not sure what's happening here. The Ollama integration page says it doesn't have full functionality; for example, it doesn't support sentence triggers. And the weather forecast is a bit of a weird one in Home Assistant: it's not an entity (unless you configure one manually) but a service call that fetches the forecast. Maybe your AI just doesn't have the forecast available, only the current condition and perhaps the current temperature. Everything else has to be requested explicitly with a deliberate weather.get_forecast call. Maybe that service call and the specific processing exist in the official Assist pipeline, but not in the Ollama integration?
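To illustrate the point about the forecast being a service call rather than an entity attribute, here is a hedged sketch of an automation that fetches it explicitly. Again, the trigger sentence and the `weather.home` entity ID are assumptions; the forecast list shape (`forecast.forecast[...]`) reflects the `response_variable` returned by the service.

```yaml
# Sketch: the forecast must be fetched with weather.get_forecast;
# it is not stored on the weather entity itself.
automation:
  - alias: "Voice forecast answer"
    trigger:
      - platform: conversation
        command:
          - "what's the forecast for tomorrow"
    action:
      # Ask the weather integration for its daily forecast and
      # capture the response in a variable.
      - service: weather.get_forecast
        target:
          entity_id: weather.home
        data:
          type: daily
        response_variable: forecast
      # Index 1 is tomorrow in a daily forecast starting today.
      - set_conversation_response: >-
          Tomorrow looks {{ forecast.forecast[1].condition }},
          with a high of {{ forecast.forecast[1].temperature }} degrees.
```

This is exactly the kind of deliberate call an LLM integration would need tool access to make; without it, the model only sees the entity's current state.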
Can't piss on me, can't tell me it's raining. What are these things even useful for?
Nice! These are great suggestions, and I apologize for any incorrect terminology in my post. To clarify, the vanilla/default Home Assistant assistant gets the forecast right every time. I just want that default model to handle simple commands, and an LLM to take the wheel when I ask random questions unrelated to home automation.