this post was submitted on 02 Jan 2024

homeassistant


Home Assistant is open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts. Perfect to run on a Raspberry Pi or a local server. Available for free at home-assistant.io

I'm currently running HA on a Pi3... it works fine, but it's now a single point of failure.

I have some new hardware arriving to run VMs in and was intending to move HA to it, but now I'm wondering if I can have HA in 2 places for fault tolerance.

I'm aware that there's no built-in failover options, but has anyone done something similar?

top 17 comments
[–] ANIMATEK@lemmy.world 10 points 10 months ago (1 children)

You may wanna look at Kubernetes. It's basically Docker with failover.

[–] scrubbles@poptalk.scrubbles.tech 7 points 10 months ago (1 children)

Correct, what OP needs is exactly what Kubernetes was made for: fault-tolerant container orchestration. Or any other orchestration framework.

However, it's a beast to learn and get set up. Migrating all of my containers over took a couple of months of learning and trial and error. Each person has to decide whether that level of effort is worth it in a home setup.
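
For reference, here's a minimal sketch of what a Home Assistant Deployment could look like in Kubernetes; the namespace, PVC name, and storage details are assumptions for illustration, not a tested config:

```yaml
# Minimal sketch of a Home Assistant Deployment in Kubernetes.
# Namespace and PVC name are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-assistant
  namespace: home-automation
spec:
  replicas: 1              # Home Assistant has no active-active support, so keep one replica
  strategy:
    type: Recreate         # avoid two instances writing to the same config volume during updates
  selector:
    matchLabels:
      app: home-assistant
  template:
    metadata:
      labels:
        app: home-assistant
    spec:
      containers:
        - name: home-assistant
          image: ghcr.io/home-assistant/home-assistant:stable
          ports:
            - containerPort: 8123
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: home-assistant-config   # must sit on storage every node can reach
```

The fault tolerance comes from the scheduler: if the node running the pod dies, Kubernetes restarts it on another node, provided the config volume lives on shared storage.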

[–] Cyber 2 points 10 months ago* (last edited 10 months ago) (1 children)

Ok, but that's assuming >1 host can be managed that way... can I manage HA on the Pi3 as a backup to my new host with Kubernetes?

Edit: can Proxmox do this too?

[–] scrubbles@poptalk.scrubbles.tech 2 points 10 months ago (1 children)

You'll need to learn a lot more about Kubernetes to decide whether you really want to do it. I'm more or less telling you that yes, there are ways to keep it highly available, but they're going to take literally 10x the effort, if not more, to spin up and probably to maintain.

Proxmox has its own flavor of HA at a lower level of virtualization: it can fail over a specific VM/CT to another node if one fails, but again, pros and cons. The major annoyance for both is: where do you put your data so two separate nodes can access it? k8s and Proxmox each have different approaches.
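
On the k8s side, the usual answer is a PersistentVolume on network storage. A rough sketch using NFS, where the server address, export path, and size are all assumptions:

```yaml
# Rough sketch: expose an NFS export as a PersistentVolume so any node can mount
# the Home Assistant config. Server address, path, and size are assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: home-assistant-config-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10            # your NAS / NFS server
    path: /export/home-assistant
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: home-assistant-config
  namespace: home-automation
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""              # bind to the pre-provisioned PV above, not a dynamic class
  volumeName: home-assistant-config-pv
  resources:
    requests:
      storage: 5Gi
```

Proxmox solves the same problem one layer down, by putting the VM disks themselves on shared storage.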

[–] c10l@lemmy.world 1 points 10 months ago (1 children)

K8s and Proxmox operate at different levels. You can run k8s on Proxmox, and that’s what I’ve been (very slowly) building up to at home.

With Proxmox you can failover VMs between nodes as long as storage (including VM boot disk) is external to the nodes. This can be NFS on a NAS, iSCSI, Ceph or many other options.

It’s even possible to failover a USB device (e.g. a Zigbee controller or similar) by attaching one on each node and mapping them using Resource Mappings (search on the announcement post: https://www.proxmox.com/en/about/press-releases/proxmox-virtual-environment-8-0).

This can be used just as well if you're deploying k8s on top of Proxmox.

[–] Cyber 1 points 10 months ago

Ah, ok, thanks... I'll have to dig in to this some more

[–] markr@lemmy.world 5 points 10 months ago (1 children)

I run HA as a container in a VM. I back up the HA data nightly, and the compose file for running HA is archived on GitHub. If the VM dies, there is another VM that can bring it back up. If the host dies (I have a pool of XenServer (xcp-ng) hosts, so it would take a major domestic disaster for them all to croak), I have a fallback to run HA on Docker on WSL. If the house burns down, all the scripts are on GitHub and the backups get sent to Azure monthly. I think I'm covered.
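
For anyone curious, the compose side of a setup like that is tiny. A sketch (not the actual file; the host config path and timezone are placeholders):

```yaml
# Sketch of a docker-compose file for running Home Assistant in a VM.
# The host config path and timezone are placeholders.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    container_name: homeassistant
    restart: unless-stopped
    network_mode: host                        # simplifies device discovery (mDNS, SSDP)
    environment:
      - TZ=Europe/London
    volumes:
      - /opt/homeassistant/config:/config     # this is the directory to back up nightly
      - /etc/localtime:/etc/localtime:ro
```

Because everything that matters lives in /config, restoring on another VM is just `docker compose up` plus the latest backup.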

[–] Cyber 1 points 10 months ago (1 children)

Ok, yep, if the house burns down (been there, done that), HA is priority 0.

But good point about the offsite backup + compose, I hadn't considered that - thanks.

Interesting that you're using a container inside a VM... is that just because you're using a VM-only hypervisor (i.e. Xen), or was there another reason?

I've heard good things about Proxmox, but no idea if it has a container / VM watchdog function.

[–] markr@lemmy.world 1 points 10 months ago

Yeah, it's type 1. But it does pooling, supports network storage, and is free, and I know how to use it.

[–] rompe@lemm.ee 2 points 10 months ago (1 children)

If you succeed, how do you plan to handle ZigBee or Zwave connections? I'm a bit unhappy with my ZigBee dongle remaining a single point of failure.

[–] Cyber 1 points 10 months ago (1 children)

I don't have any Zigbee devices at the moment, but I was looking into network-based ones... not sure if I can have 2 of those? (Again, no Zigbee experience yet to know the options.)

Best Zigbee Coordinators for Home Assistant 2023

[–] chunkystyles@sopuli.xyz 2 points 10 months ago (1 children)

From everything I've seen, the networked ones are never recommended over USB dongles.

[–] Cyber 2 points 10 months ago (1 children)

Oh, interesting. From a performance point of view, or reliability?

[–] chunkystyles@sopuli.xyz 1 points 10 months ago (1 children)

I don't know personally, but I'd assume it comes down to ease of use and reliability.

You could probably get something close to a networked Zigbee dongle by running zigbee2mqtt on a Pi with a USB dongle and running nothing else on it. It would potentially make restoring it after a failure easier.
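
As a rough sketch, the zigbee2mqtt side of that is a small configuration.yaml pointing back at the MQTT broker your Home Assistant uses; the broker address and serial port below are assumptions:

```yaml
# Sketch of a zigbee2mqtt configuration.yaml for a dedicated Pi + USB coordinator.
# The broker address and serial port are assumptions for illustration.
homeassistant: true                    # publish Home Assistant MQTT discovery messages
permit_join: false                     # only enable temporarily when pairing devices
mqtt:
  base_topic: zigbee2mqtt
  server: mqtt://192.168.1.20:1883     # the broker your Home Assistant instance also talks to
serial:
  port: /dev/ttyACM0                   # the USB coordinator plugged into the Pi
frontend:
  port: 8080                           # optional web UI for managing the Zigbee network
```

Home Assistant then only needs to reach the MQTT broker, so the Pi with the dongle can be rebuilt or swapped without touching HA itself.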

[–] Cyber 2 points 10 months ago

Hmm... good point. I've even got an original Pi kicking around somewhere that I could use... Thanks

[–] chris@l.roofo.cc 1 points 10 months ago (1 children)

HA might be possible in an active-passive configuration if you don't have any dependencies on external hardware like a Zigbee stick. Active-active would need support from HA, and I don't think that is implemented.

I think the most secure thing is to keep regular backups so you can roll back easily.
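
For illustration, the regular backup can even be scheduled from inside Home Assistant. A minimal sketch, assuming a Container/Core install where the core Backup integration's backup.create service is available (HA OS installs use hassio.backup_full instead); the time is arbitrary:

```yaml
# Sketch: nightly backup automation in configuration.yaml.
# Assumes the core Backup integration (Container/Core installs); the time is arbitrary.
automation:
  - alias: "Nightly Home Assistant backup"
    trigger:
      - platform: time
        at: "03:00:00"
    action:
      - service: backup.create
```

The resulting archive still needs to be copied off the machine, of course, which is the next problem.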

[–] Cyber 1 points 10 months ago

Thanks, yes, I think active-active would be another order of magnitude harder... and would need the database, history, etc. on shared storage... over the top just to ensure the lights stay on.

And backups are essential for all use cases (and not just the built-in HA backup left on the device / VM / container that just failed!)

Thanks