this post was submitted on 17 Jul 2023
16 points (94.4% liked)

Selfhosted

I’m working on setting up my first homelab. I have an older Dell OptiPlex with a dual-port PCIe NIC in it. I was wondering if I could set up OPNsense as a Docker container or virtual machine so that I could also use the extra resources of the box for other things besides just being a router. Is this a good idea?

top 12 comments
[–] tvcvt@lemmy.ml 7 points 1 year ago

Hey, as others have said, you can definitely set up OPNSense in a VM and it works great. I wanted to take a second and answer the first part of your question: it cannot run in Docker. Containers in Docker share their kernel with the Linux host machine. Since OPNSense isn’t a Linux distribution (it’s based on FreeBSD), it can’t make use of the shared Linux kernel.

[–] bear@slrpnk.net 6 points 1 year ago (1 children)

Yeah, this is perfectly doable. I ran a very similar setup for a while. I'd recommend passing one of the NICs directly through to the VM and using one for the host to keep it simple, but you can also virtualize the networking if you need something more complex. If you do pass through a single NIC, you'll need a switch capable of handling VLANs and a bit of knowledge on how to set up what's called a "router on a stick" with everything trunked over one connection and only separated by VLANs.

Keep in mind, while this is a great way to save resources, it also means these systems are sharing resources. If you need to reboot, you're taking everything down. If you have other users, that might be annoying for everyone involved.
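For anyone picturing the "router on a stick" layout mentioned above: a minimal sketch of a VLAN-aware bridge on a Debian/Proxmox-style host (interface names and VLAN IDs here are made up, not from the thread):

```
# /etc/network/interfaces — hypothetical single-trunk setup
auto enp1s0
iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

The OPNsense VM then gets a single virtual NIC on vmbr0 and defines its own VLAN interfaces (say, LAN on VLAN 10, guests on VLAN 20), while the switch port that enp1s0 plugs into is configured as a trunk carrying those VLANs.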

[–] wiggles@programming.dev 1 points 1 year ago (1 children)

I have a managed switch. I’m a little confused how everything would be hooked up if I’m using a VM for pfSense and another VM for some Linux distro. I want the router and that distro to be isolated from my other VLANs. Could I use the onboard NIC hooked up to the switch to put the distro on its own VLAN?

[–] bear@slrpnk.net 1 points 1 year ago* (last edited 1 year ago)

You can absolutely attach each VM, and even the host, to separate NICs, each of which connects back to the switch and has its own VLAN. You can also attach everything to one NIC and just use virtual bridges on the host to connect everything. Or any combination therein. You have complete freedom in how you want to do it to suit your needs. How this is done depends on what you're using as a hypervisor on the host though, so I can't give you exact directions.

One thing I should have thought of before: if the two NICs are on a single PCI card, you probably can't pass them through to the VM independently of one another. That would limit you to virtual networking if you want to split them.
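Whether the two ports can be split comes down to their IOMMU groups on the host; devices in the same group can generally only be passed through together. A quick way to check (Linux only, and only meaningful with the IOMMU enabled in firmware and on the kernel command line):

```shell
# List PCI devices by IOMMU group. If both ports of the dual NIC
# appear in the same group, they can only be passed through together.
shopt -s nullglob
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=${dev#/sys/kernel/iommu_groups/}
    group=${group%%/*}
    echo "group $group: $(basename "$dev")"
done
```

If the directory is empty, the IOMMU is off or unsupported, which rules out passthrough entirely.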

[–] MangoPenguin@lemmy.blahaj.zone 4 points 1 year ago (1 children)

You can do it as a VM.

The only downside is you lose internet while rebooting your server, may not be a big deal though.

[–] teutoburg1@lemmy.ml 1 points 1 year ago

I have OPNsense virtualized on a Proxmox server alongside a couple of things that should hardly ever need restarts. It actually works pretty well, because the host almost never needs a reboot, and rebooting a VM is way faster than bare metal.

[–] Arrayrepairman@lemmy.world 3 points 1 year ago (1 children)

I have pfSense virtualized with no issues.

[–] Arrayrepairman@lemmy.world 2 points 1 year ago

A bit more about mine now that I have a little more time: it's a VM on VMware with two virtual interfaces, one on my DMZ VLAN and the other a trunk with the rest of my VLANs. Within the *sense VM, I have the two interfaces, and then virtual interfaces that correspond to the VLANs. My router is plugged into my switch on an access port for the DMZ, and the ESXi hosts are connected to the switch with VLAN trunks. This allows me to migrate the router to another host for reboots.

[–] jflesch@lemmy.kwain.net 2 points 1 year ago* (last edited 1 year ago)

I use OPNsense virtualized on top of Proxmox. Each physical interface of the host system (ethX and friends) is in its own bridge (vmbrX), and for each bridge the OPNsense VM has a virtual interface that is part of that bridge. It has worked flawlessly for months now.
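To make the bridge-per-physical-interface layout above concrete, a hedged sketch of what the Proxmox host config might look like (device names are illustrative, not from the comment):

```
# /etc/network/interfaces — one bridge per physical NIC
auto eth0
iface eth0 inet manual

auto vmbr0            # WAN bridge: eth0 plus the VM's first virtual NIC
iface vmbr0 inet manual
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0

auto eth1
iface eth1 inet manual

auto vmbr1            # LAN bridge: eth1 plus the VM's second virtual NIC
iface vmbr1 inet manual
    bridge-ports eth1
    bridge-stp off
    bridge-fd 0
```

The trade-off versus PCI passthrough is that the NICs stay usable by the host, at the cost of traffic crossing the Linux bridge instead of going straight to the guest.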

[–] Legarth@lemmy.fmhy.ml 1 points 1 year ago

I'm doing it as a VM running in TrueNAS, and it works perfectly. The LAN NIC is shared between the host and OPNsense, and the WAN NIC is passed through to the VM as hardware.

It's much better than my USG4 Pro, so that is sitting next to the server, turned off.

[–] SirNuke@kbin.social 1 points 1 year ago

Only issue I had with a similar setup is that it turned out the old HP desktop I bought didn't support VT-d on the chipset, only on the CPU. I had to do some crazy hacks to get it to forward a 10GbE NIC plugged into the x16 slot.

Then I discovered the NIC I had was just old enough (ConnectX-3) that getting it to forward properly was finicky, so I had to buy a much more expensive ConnectX-4. My next task is to see if I can give OPNsense a virtual NIC, have it listen for web requests only on that interface, and use the host's Nginx reverse-proxy container for SSL.

Yes, you can. You need a hypervisor capable of IOMMU passthrough. I know for a fact that you can do it with libvirtd and KVM/QEMU, and I think you can do it with Proxmox. That much said, I've no experience doing this myself.
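For the libvirt/KVM case, PCI passthrough is declared as a `hostdev` element in the domain XML. A sketch with a made-up PCI address (find the real one with `lspci`):

```
<!-- libvirt domain XML fragment: pass the NIC at 0000:01:00.0
     (hypothetical address) through to the OPNsense guest -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With `managed='yes'`, libvirt detaches the device from its host driver when the VM starts and rebinds it when the VM stops; the host kernel also needs the IOMMU enabled (`intel_iommu=on` or `amd_iommu=on` on the kernel command line).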
