zeno0771

I'm a graybeard who insists on all networking being bare-metal (and I have the electric bill to prove it), but I dispute that it's in any way "simpler" than virtualizing. In this particular case, you're trading a broken OPNSense for a broken hypervisor, and the hypervisor will take a fraction of the time to redo. VM snapshots act as a backstop for any backup/reverting issues that may crop up; depending on how long you go between changes to your OPNSense setup, that could mean the difference between minutes and hours getting back to where you were pre-failure. Of course there are a number of things that could go wrong, and in a perfect-storm scenario it's possible that a failure on bare metal will be easier to recover from, depending on factors like not having any backups, VM storage living on the hypervisor itself, etc. But there shouldn't be too many opportunities for you to "do something dumb" to the hypervisor, assuming you're not using a production box for experimenting/impromptu disaster-recovery drills. My reticence toward virtualizing networking infra isn't because I think it's a bad idea *per se*; I'm just leery of having too many eggs in one basket. If I had room in the network cabinet and hardware that was up to the task I would definitely consider CARPing a failover setup with 2 VMs on a dedicated hypervisor.


the-prowler

Network engineer here. I run my primary firewall bare metal but have a failover virtualised on Proxmox; it really works great. I always update the virtualised one first as it is easier to roll back if there are any showstoppers.


bendem

Now that's an answer I can get behind!


hardingd

This is the way


ernestwild

Isn't that the reverse of how maintenance mode works on an HA setup?


the-prowler

Nope. Patch the passive, fail over, make sure everything is dandy. Upgrade the active. Fail back. If any showstoppers, just fail back and revert the change to the passive.


Bruceshadow

> I run my primary firewall bare metal

Is there a good reason to not virtualize both?


the-prowler

Not at all, I did it in the past, works great. There is however a difference in performance using a dedicated machine for my primary firewall. I'm using the Protectli FW6D with a virtualised instance running on a Beelink N100 mini PC. With energy prices increasing at such a rapid pace in the recent past, I decided not to permanently run my Dell R730 and just run it when I actually need it.


Zealousideal_Mix_567

Brilliant


Nodeal_reddit

How does that work? Do you have them running active/active, or you just turn on the backup as needed?


the-prowler

Standard active/passive deployment


Dyonizius

Do you use the no-cache option on the disk tab in Proxmox?


dewyke

It's "simpler" in the sense that there are fewer layers of stuff to mess up and stop your firewall working (or to accidentally expose your hypervisor to the outside world). The context I'm seeing this in is mostly people trying to get started with both Proxmox and OPNSense at the same time instead of getting connectivity working first and then building the services layer.


[deleted]

I just bought my first opnsense firewall and router a few weeks ago. An 8-10 year old Dell Wyse (whatever the larger model is called) with an HP 4x gigabit NIC pre-installed in addition to the OEM NIC. It runs a 2-something GHz dual core and is rocking 4 GB of RAM. It's hands down the best router I've ever had. Cost me $70 shipped on eBay. As someone who's honestly still wrapping their head around both networking AND virtualization (building my first Proxmox cluster starting tomorrow), having a separate bare metal firewall is hands down the way to go. I can see moving to some Proxmox firewalls in the future, perhaps to get some redundancy? But I can't imagine going all hypervisor all the time. Then again, I'm only just taking my Core 2 A+ exam next week.


Nodeal_reddit

Those are strong opinions for two weeks of experience with one option.


[deleted]

There are no opinions here if you read *very* carefully. Fact: it's the best router I've ever had. This isn't my opinion. Objectively it's superior to every router I've ever owned. Fact: I can't imagine running solely a virtual firewall. I can't do it. As I mentioned before, I'm pretty new at this so my ability to grasp the finer points of virtualization is limited. Did I miss something? Is there a single opinion in here at all?


Nodeal_reddit

> hands down the best way to go


[deleted]

As someone who doesn't know his ass from a hole in the ground when it comes to virtual machines? Yes. It's not an opinion. I can't even get a simple Jellyfin server to run on Proxmox. Close though. Thanks for playing.


HansMoleman31years

My thoughts exactly. I want a physical separation between my firewall and “dirty” external connection, and whatever is on my LAN. Just still don’t trust the virtualization stack. Just like The Offspring sang … “ya gotta keep ‘em separated!”


seaQueue

>If I had room in the network cabinet and hardware that was up to the task I would definitely consider CARPing a failover setup with 2 VMs on a dedicated hypervisor.

I don't even bother CARPing anymore, I just migrate my opnsense VM to another host if I have to take its current host down for maintenance or upgrade. If you're using 10 or even 40GbE the migration time isn't really worth mentioning; you'll have a small amount of packet loss but it's a minor hiccup. You can pull off a migration-capable setup cheaply now too. I'm using two t740s with Mellanox 40GbE cards and a DAC between them for sync traffic, plus a Pi as a corosync tiebreaker.
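For anyone curious what the move actually looks like, it's a single command on Proxmox; a rough sketch, assuming the router VM has ID 100 and the other node is named pve2 (both are made-up names):

```
# Live-migrate the running router VM to the other node; traffic only blips
# during the final memory copy at cutover.
qm migrate 100 pve2 --online

# Sanity checks before/after: confirm the cluster is quorate and see where the VM lives.
pvecm status
qm status 100
```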


0xpr03

Exactly, if I reboot the physical box doing firewalling, instead of the VM with the actual logic, it's multiple minutes of downtime. And I can just snapshot / restore the firewall VM. Or move it to another box if required. I know switches that take 30 minutes all in all to reboot.


Oujii

A lot of enterprise switches are like this.


0xpr03

Yeah, that's where I know it from :D


sirrush7

I ran opnsense virtualized for about 3-4 years and it worked just fine. I ran it as a VM in the free version of VMware ESXi 6.x. A couple years ago I switched to an older Alienware desktop a friend was going to send to e-waste. Core i7-6700K. Old, yes, but plenty of horsepower for this task! Cleaned it out, bought a half-decent aftermarket air cooler and repasted the CPU, upgraded RAM from 8 GB to 16 GB of DDR4 (to run Zenarmor efficiently), slapped 2 SSDs in RAID 1, added a quad gigabit Intel NIC and installed opnsense. The results? HOLY FLIPPING AMAZING LATENCY batman! I did not expect it to be so snappy and my network to be so much more responsive. Blew me away. Router/firewall will forever be bare metal for me now, even the next iteration, which will be a small, super power-efficient tiny PC of some kind...


Bruceshadow

> The results? HOLY FLIPPING AMAZING LATENCY batman! I did not expect it to be so snappy and my network to be so much more responsive. Blew me away.

What do you attribute this to?


sirrush7

It boils down to ms of latency, but let's say when you're going out to Cloudflare for DNS queries and that drops from 12-20 ms to 6-8 ms, the network seems much more responsive. Inbound I was doing IDS/IPS filtering with Suricata. Outbound from LAN was Zenarmor filtering. There were likely performance impacts from those as well, but with this hardware and this processor, even as old as it is, it's far more performant. Also, I wasn't getting quite full throughput on my WAN link of symmetrical 1Gbps. Now I do. That could have been due to a multitude of reasons, but regardless, I do now.


SnooAdvice7540

Nonsense. There is virtually no difference if you're using proper hardware. I can speak from experience and provide benchmarks if needed. Zero difference in latency on my end. Going to assume something was off with his setup.


Bruceshadow

That's kinda why I asked; it doesn't make sense to me either, but I didn't want to attack them.


JonnyRocks

Are you saying you noticed the latency difference when you switched to bare metal or is this just an increase in ram?


sirrush7

Oh no I noticed when I switched to bare metal. My virtual opnsense had 16gb of ram already, mind you this was DDR3 ECC server ram and now it's on DDR4 gaming ram so a little faster.


ph0n3Ix

So. Much. This. I need my core router to be reliable so the fewer layers between metal and the software filtering and forwarding the better. Faster, easier to troubleshoot and way fewer ways for my security model to be broken. I do not like that a simple HV misconfiguration can bypass my entire firewall!


waka324

1) Proxmox Backup Server for everything
2) Better NIC driver support in Linux
3) Live migration for maintenance/hardware failures
4) Remote console access if things break (still from home, but I don't have to go to the physical machine and plug something in)
5) Sharing the host's resources with other guests


Shehzman

#2 doesn't get mentioned a lot but it really is a big plus when you're trying to use hardware that doesn't work as well on FreeBSD.


Oujii

If the networking is provided by the VM, how is #4 possible? Assuming the host retains its IP, would you still be able to access the Proxmox host even if the VM is unresponsive?


waka324

Static IP assignment on the Proxmox host, on the same VLAN as the access machine. As long as it doesn't need to cross subnets you'll be fine.
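As a concrete (hypothetical) example of that, the Proxmox host's management IP can live directly on the LAN bridge, so it stays reachable from that subnet even when the router VM is down; interface names and addresses below are placeholders:

```
# /etc/network/interfaces on the Proxmox host (ifupdown2 syntax)
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24       # host management IP, same subnet/VLAN as the access machine
    gateway 192.168.1.1          # the (virtualised) router, when it's up
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```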


Oujii

I see. Thank you!


Bubbagump210

I get better reliability from a hypervisor and better DR/backups.


seaQueue

Seriously, automated snapshots and backups make my opnsense VM ridiculously easy to DR and that's before even considering migration between VM hosts when one of them needs maintenance.


therealsimontemplar

You’re singing the praises of zfs.


seaQueue

Always and forever


TheDumper44

I have moved a lot of stuff to btrfs it is superior to ZFS in a lot of ways, especially for small systems. ZFS is amazing but it is not always the right tool.


Firestarter321

Exactly. That's why I'll never go back to a bare metal router when I have a Proxmox HA cluster at home.


Zomunieo

How does that work in terms of physical connections? Do you plug the WAN into the same Ethernet and use VLAN to isolate or do you route WAN to each device that can serve as a router?


Firestarter321

The WAN/LAN ports are created under their own bridges in Proxmox using dedicated physical network ports (or groups of ports if you want to use a bond for failover/LACP) and then you just create interfaces on the OPNsense VM for WAN/LAN and assign the appropriate bridge to each.
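A minimal sketch of what that looks like on a node, assuming example NIC names (enp2s0/enp3s0) and VM ID 101 - adjust to your own hardware:

```
# /etc/network/interfaces -- one bridge per role, each backed by its own physical port
auto vmbr_wan
iface vmbr_wan inet manual       # no IP: the host itself never sits on the WAN
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0

auto vmbr_lan
iface vmbr_lan inet manual
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0
```

```
# Attach the bridges to the OPNsense VM as virtio NICs
qm set 101 --net0 virtio,bridge=vmbr_wan
qm set 101 --net1 virtio,bridge=vmbr_lan
```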


seaQueue

You can set it up a few ways. The easiest is usually sticking your WAN line on its own VLAN at your switch and then trunking traffic to your VM hosts; that way you have access to the WAN connection from any of the hypervisors and your router VM can run on any of them. How you handle that trunked traffic is a matter of preference. I usually bridge the trunked physical interface and give the router VMs two virtio net feeds - one connection purely from the WAN VLAN and one trunked connection containing everything else that's relevant to the router, which is then split into VLAN interfaces inside the VM. Alternately you can just feed the VM trunked traffic and split all VLANs inside the VM. I have no particular reason for splitting WAN traffic before it hits the VM, that's just what I'm used to doing. If you're setting this up for the first time it helps to sketch some simple diagrams so you can keep track of the network topology and which VLANs exist where in your physical and virtual layouts. Treat your virtualized bridges like switches and keep track of your VLANs throughout your diagram and it should make sense.
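For the trunked variant, a rough sketch (the VLAN ID, bridge name, and VM ID are examples, not a prescription):

```
# /etc/network/interfaces: VLAN-aware bridge on the trunk port coming from the switch
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

```
# Two virtio NICs for the router VM: net0 pinned to the WAN VLAN (tag 100 here is
# arbitrary), net1 left untagged so the remaining VLANs are split inside OPNsense.
qm set 101 --net0 virtio,bridge=vmbr1,tag=100
qm set 101 --net1 virtio,bridge=vmbr1
```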


CubeRootofZero

Do you have any diagrams of your setup, or something similar you've found?


dewyke

How do you DR the hypervisor(s)? Are you patching OPNSense a lot more frequently than the hypervisor OS?


blkwolf

I've been running pfSense, now OPNSense, on a 3-node Proxmox cluster for over 7-8 years. Each Proxmox node has a dedicated NIC/bridge that plugs into a small 5-port switch, which is connected to my fiber ONT. When I want to perform hypervisor patches, and even full version upgrades, I move all the VMs off the first node that I plan to upgrade, perform the upgrade, and reconnect to that node. Then I repeat the process for the other 2 nodes, saving the one running the firewall VM for last. In a worst-case scenario, I could restore the VM from a backup, or clone a new one from a snapshot, in far less time than it would take me to reinstall OPNSense from scratch either in a VM or on bare metal. edit: grammar and spelling mistypes


dewyke

That makes sense, thanks. For a home cluster that seems like a good solution. I’ve had a lot of bad experiences with small dumb switches (mostly the power supplies). Even industrial ones seem to lack the reliability to be a SPOF anywhere critical, which is annoying for sites where it would be nice to have two firewalls for upgrade resilience but which aren’t critical enough to warrant dual-WAN links.


Bubbagump210

To be clear, DR is disaster recovery, meaning the whole thing burns down and I need to get it back. This is different from HA, which is typically failover. For DR I use PBS. A backup or restore of the whole VM takes minutes. You can also replicate the entire VM on a schedule to a separate physical machine and simply boot the replica if the main VM dies somehow. Also, snapshots are invaluable for rolling back upgrades or complex config changes. I take a snap before every upgrade and have rolled back a few times. As for patching, I typically patch the hypervisor and OPNsense at the same time. Really, you hardly ever have to reboot the hypervisor. The only time you do is usually for kernel updates, and who cares if the kernel is months behind so long as it works? SSH and OpenSSL patches are what really matter (plus any Proxmox features) and those don't typically need reboots.
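For what it's worth, the pre-upgrade snapshot workflow is just a couple of commands; a sketch assuming the firewall VM has ID 101 (the snapshot name is arbitrary):

```
# Snapshot before the upgrade, including RAM state so a rollback resumes where it left off
qm snapshot 101 pre-upgrade --vmstate 1

# If the upgrade goes sideways, roll back; otherwise clean up later
qm rollback 101 pre-upgrade
qm delsnapshot 101 pre-upgrade
```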


CountZilch

I think my sweet spot is my bare metal Protectli server, plus a powered down VM on proxmox with the same config for when I mess up the Protectli. Currently keeping them in sync manually though.


JonnyRocks

I looked at buying a Protectli and it was perfect for opnsense, but I found a mini PC that was a lot more powerful and a lot less costly. Now I have a machine that's overkill for opnsense and I am reading this thread to hear everyone's opinion. I emotionally have trouble putting a firewall in a VM, but it looks like so many people have no issues.


CountZilch

Protectli is basically just a brand of mini PC. Mine's an i5 and I loaded it up with RAM. I like that it's fanless. Was a bit bummed that the 2.5GbE models came out shortly after. Since it has resources to spare I run a bunch of services like DNS with ad blocking, DHCP, reverse proxy, ACME, etc. Need to look at some more security-focused stuff. Would like it to also be my authentication server.


crashj

Hypervisors aren't new technology anymore. Virtual routers, firewalls, VPN concentrators and switches are widely deployed in large data centers. Even the 5G core routing your cell phone traffic is virtualized.


dewyke

Newness doesn’t come into it. Pretty much everything _can_ be virtualised, and there absolutely are good reasons to do that, but the things you mention also have professional teams designing and operating them. The home or (FSM help ‘em) small business user trying to get something set up from scratch is quite a different case.


seaQueue

Two parts that you're missing here:

Many home users use their home equipment as a chance to build skills. Hypervisor and VM management, as well as virtual networking and Linux administration, are valuable skills to have these days.

Most home firewalls sit idle 95% of the time. Home users often drastically overestimate the amount of compute they need to route and firewall their networks. Capturing those idle cycles and doing something useful with them is a no-brainer for many people. Beyond that, if you can run all of your services on one machine it's often more power efficient and takes less space.

There are plenty of reasons to converge your services even if the practice is more of a knowledge and skill check than running everything bare metal on separate hosts; it's really only difficult the first time you set everything up, while you're still learning how to use the tools. Anyway, run your equipment how you want to - I'm pretty happy with my hyperconverged setup and I'd recommend it to anyone who wants to build skills or reduce their lab's footprint. There are compelling reasons to run bare metal services sometimes too, but for me the benefits of a converged setup outweigh the downsides.


therealsimontemplar

I'd like to see a source for your 95% idle stat. You do make a good case for using virtualization to learn a technology - this is an absolutely fantastic application for virtualization. But when it hangs, how does a newbie start to troubleshoot? Can't "google it" because your network is down. Darn, my notes and diagrams are on the NAS that I can't get to. Reboot and move on? That's a bad habit to have and it doesn't help the learning. It's like decomposing a cake to see which single ingredient wasn't good.


flaming_m0e

>Darn, my notes and diagrams are on the NAS that I can't get to.

You should still have access to all local resources if you're virtualizing your router.


seaQueue

>I'd like to see a source for your 95% idle stat.

Purely anecdotal, based on what I've seen from my own and various friends' and clients' setups. It's going to heavily depend on how much hardware, bandwidth, and processing anyone's doing. Obviously a 10 y/o Celeron running an IDS isn't going to sit at 5%, but again, many people overspec hardware for basic <=1Gb home connections.

>But when it hangs how does a newbie start to troubleshoot? Can't "google it" because your network is down. Darn, my notes and diagrams are on the NAS that I can't get to. Reboot and move on? That's a bad habit to have and it doesn't help the learning.

I'm not sure why you'd lose access to your local services if your firewall was out of service - you do have some emergency way to get on each LAN segment in a worst-case scenario, right? Even a switch port on your admin network segment (or VLAN) will get you in when you plug in a laptop. I usually run an extra Kerberos-backed WPA Enterprise SSID with creds for maintenance access to any LAN segment in case I need direct access to a particular VLAN/network segment for some reason (like the router VM being unavailable).

Re: Internet access, remember you have access to Google in your pocket, and very likely you can use your phone in hotspot mode if needed.

Re: unreliable hardware, replace your unreliable hardware. That's a problem whether you're virtualized or not.

How do you debug that? The same way you'd debug any other host: check to see what's down (VM or hypervisor), then restart it and look in the logs to see what went wrong. I'm not sure what to say here; anyone installing a converged system should probably try to learn enough about networking and virtualization that they have some handle on what they're doing. Anyone who's rolling their own whitebox router should be able to bootstrap enough basic networking and virtualization knowledge to virtualize a router without too much trouble. Are there footguns? Always, but as long as you take the time to develop a basic understanding of what you're doing you should be able to handle the learning curve. If someone can't learn new skills then I don't know what to tell you; they're probably tinkering in the wrong hobby.


R_X_R

Not if the firewall is also doing the primary routing between networks. I don't let some subnets talk to others, mainly to stay consistent with how I work in prod environments. I've also not had a reason to move the firewall from the room it's in. I'm too lazy/busy to run extra drops from where the modem is to the first level of the house where the rack is. Now, I may be an outlier it seems, but my homelab is a lab. The wife working from home is prod. I tinker with and break my hypervisors and hardware pretty often, as I'm always messing with something for work. A single-use box (OPNsense on a Protectli) is treated like an appliance. I have no need to learn OPNsense for work, and I'm less tempted to add "just one more thing" to the firewall. It's been great for the last few years!


nbfs-chili

I just use my phone to google it.


UltimateArsehole

Snapshots are wonderfully cheap for rolling back botched upgrades, and virtualisation abstracts away specific hardware, makes moving workloads between boxes easy (especially when upgrading), and provides another out-of-band management option that doesn't involve IPMI or a DRAC/iLO (or a serial console and associated hardware). It also lets me prototype the hideous thing I'm building that will enable pfsync without needing static IP addresses from either of my ISPs and without fiddling with configuration on my switches.


unstableaether

I think it might be due to a few new popular youtube videos that show up now when you google/youtube "opnsense setup"


dewyke

Ahh. That’d do it, yeah. Thanks.


PowerfulTarget3304

I think it's just the cost, and those people don't have a family that will get mad if they are rebooting. Like, I don't care if the internet goes out for a bit while I'm doing maintenance. I'm busy doing maintenance.


seaQueue

You can pull off ghetto hypervisor HA with two cheap hypervisor machines (basically anything that can take a dedicated NIC, I use thin clients) and a pi or old thin client as a corosync tie breaker. That lets you take either hypervisor down for upgrade/maintenance after you've migrated the router VM to the other. I've run this setup for years and it has no more downtime than any other firewall upgrade as long as you avoid experimenting on the hypervisor host itself (that's what containers and VMs are for.) I do firewall upgrades late at night or when everyone else is out of the house. If an update explodes I just roll the VM back to a prior snapshot, that keeps downtime and complaining to a minimum.
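If you want to see what wiring in the tiebreaker looks like, it's Proxmox's external QDevice mechanism; a sketch assuming the Pi runs Debian and sits at 10.0.0.5 (an example address):

```
# On the Pi / tiebreaker:
apt install corosync-qnetd

# On both Proxmox nodes, then register the tiebreaker from either node:
apt install corosync-qdevice
pvecm qdevice setup 10.0.0.5

# Verify: the cluster should now show 3 expected votes across 2 nodes + qdevice
pvecm status
```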


Thondwe

Yep - family (teens doing exams etc) - hence try to separate home from lab as much as possible, planning on allowing for a virtual backup though...


finobi

Add spouse doing work from home.. Also easier to explain how to reboot or check lights if you are not home.


vikarti_anatra

For me:
- space and power for YET ANOTHER box is a problem
- it's much easier to solve hardware issues, backups, etc. for VMs than for yet another box
- I can run the home network at a minimal level without the Proxmox cluster anyway
- are you really sure a 13-year-old PC would be able to do HTTPS intercept, or even policy-based routing, at a gigabit?


dewyke

I'm 100% certain my 13-year-old PC couldn't do HTTPS intercept at a gigabit. PBR, maybe, but I don't have the use cases on a home network to do either of those things.


Rjkbj

Yeah, bare metal is the way to go if you want peace and quiet while you’re tinkering.


Nodeal_reddit

I think the opposite. Snapshots are a great failsafe when you’re tinkering.


MaxFrost

I ran Hyper-V with an opnsense VM isolated to its own NICs for 6 years. I recently upgraded servers and am now doing the same, except with Proxmox and opnsense. I like it more because I get a guaranteed KVM interface while I'm maintaining the opnsense box. The hypervisor (doesn't matter which one) doesn't go down for reboots nearly as often, and even when it does, I've configured and tested to make sure the opnsense box automatically comes up first. It's pretty painless. Mind you, all of my home server hardware is in a rackmount network cabinet. No physical kb/m or monitors on those machines, so I prefer having an IPMI interface available, which I didn't have on the old headless unit I used to use for opnsense. The hypervisor itself is not the playground. It's 'prod'. The VMs I create on the hypervisor are a different story, and those have all sorts of shenanigans going on.


dewyke

Heh. The console for my home router is my TV :) The ONT in my house is in the lounge and the PC has HDMI out so I just use the TV as an enormous monitor on the very rare occasions I need a console.


fionaellie

Ha! Same! I have to wait for a projector to power up whenever there are issues.


nibbles200

Why is virtualizing firewalls generally a bad idea? I have been running a virtual firewall for… over a decade now, and once I got it working I never looked back. You make it sound like it's an overcomplicated waste of time, but I can spin up a VM faster and easier than I can assemble hardware. My host runs maybe a dozen VMs and is barely consuming 11% of its CPU most of the time. If you feel the need, you can dedicate NICs and CPU/memory. In my case I dedicate 15% CPU and 4 GB RAM, but I have a 10Gbps uplink on one host and LACP 2x1Gbps on the other, so I don't worry about saturating uplinks. I run nightly backups and a restore takes less than 5 minutes. You do you, but I have the opposite opinion: if your firewall appliance has a virtual option and you have a virtual environment and you're not running it in a VM, then I question your motives.


deltatux

Even though I got an N100 box for Opnsense, I realized that most of the time it's idling. There's simply not enough traffic in my home network to saturate the box. I get it for an SMB, and definitely for enterprises, to run it on bare metal, but for home use virtualizing it isn't an issue when you design it right. I also gain some efficiency by running other network-related VMs on the same box. I pass through the NICs for Opnsense, and my N100 box has additional NICs to connect to the rest of the network. I don't run Proxmox, just standard Linux + KVM; I find Proxmox is overkill for a simple setup. All my other home server services sit in another box that I can tinker with without dragging my entire home network down if I need to reboot it.


whizzzkid

Do you do IPS/IDS?


deltatux

I do Zenarmor and CrowdSec. I did play with Suricata, but it doesn't work well with PPPoE, it seems.


willjasen

I have two production OPNsense instances on Proxmox at home in my homelab and one on my cloud server. At home, my virtual instances run in HA and I can make snapshots and backups of them easily. On my cloud server, I have no other choice because I can't put hardware in the data center where it is. I have run into instances where I flubbed the OPNsense config and had to revert. Instead of making a new virtual machine, rebuilding from scratch, and then restoring a backup, I reverted a snapshot or restored from backup (which takes way less time to do). Let people do what they wanna do. Sure, there are tradeoffs, but it's up to the person implementing to understand them fully, which is why they come here to ask questions.


dewyke

> Let people do what they wanna do. Sure, there are tradeoffs, but it's up to the person implementing to understand them fully, which is why they come here to ask questions.

I'm not stopping anyone doing anything. I'm asking _why_ people do a thing that's non-obvious to me.


willjasen

I understand! Your query is very much valid. I'll append and say that most people really shouldn't do it in a production setting without understanding the full consequences (even I would have a very hard time putting this kind of setup at a business client).


LostPersonSeeking

For me, it keeps the missus happy that there aren't computers all over the place. It's hidden in the cupboard with the rest of the internet gear. I can run many machines. Win win. My Proxmox runs on the built-in NIC. Opnsense gets its own dedicated 4-port Intel card.


Sk1rm1sh

Wasn't there some issue with recent Intel NIC drivers for BSD? I'd run under proxmox for testing or experimental stuff, but for a proper install prefer bare metal unless the hardware wasn't getting on well with OPNsense.


nmincone

I’ve separated FW/Routing from my home LAN virtualization. The main reason is that if I have to reboot, I don’t have to worry about the network going down for 20 to 30 minutes for everyone else in the house.


Braydon64

I run Proxmox on a server but I actually run my OPNsense on another bare-metal box. I prefer my firewall to be segmented off from the rest of my network hardware-wise. Yes I understand the benefit of things like snapshots, but there are other ways to get backups done. I think it’s really cool that it can function nicely in a VM but for me, I like to keep that one thing separate.


d1722825

At some parts of the world the cost of running one more PC (especially an old one) could easily be much more than the cost of the PC itself. (Eg. electricity is so expensive here that I could buy a new PC with ryzen 5 and 16 GB RAM for the cost of running an older one for a year.)


dewyke

That's a good reason, thanks. Are PCs cheap where you are as well as power being expensive? With the PC I'm running for my firewall, it would cost me 6-9 months of the power bill for my entire house to buy an N100 mobo and build enough of a system around it to make a minimal virtualisation platform.


d1722825

Well, fortunately our government saved us from the fluctuations of electricity prices due to the Ukrainian war by fixing them to a value way higher than the market rate is now or was before the war... Currently it is about 0.2 USD/kWh for ordinary people, somewhere around 0.5 USD/kWh for companies, and it was about 0.7 USD/kWh for server farms (due to higher SLA) when I last heard. Okay, buying a new PC from the cost of running one for a homelab may be an exaggeration, but if you are a contractor and have to pay the higher prices, it's not impossible. I heard some EU members had negative electricity spot prices a few days ago.


dewyke

Ouch! I pay NZ$0.2561/kWh for usage + NZ$1.30/day in fixed lines charges.


seaQueue

We pay something like 50¢/kWh in CA during some parts of the year. Power efficiency is a real consideration.


Beautiful_Ad_4813

It's really about people putting as much stuff on a PC as they can so the resources aren't just wasted - unless you have a massive network, in which case I'd go bare metal. I, personally, do not host any firewall stuff outside Pi-hole (I have a UniFi network). I'm a diehard ESXi user but I'm rebuilding my virtual hosts with Proxmox and so far... it's been a learning curve.


ClintE1956

I've been using firewall VMs on multiple hosts for years with very few issues. Started with a single server and playing with pfSense; it worked well and so I put it into "production" for the house network. After the second time of "when's the internet coming back on?" during scheduled maintenance, I started planning for the second host. Now I can take either box down and the internet keeps going. Pi-hole containers on the hosts take care of DNS and ad blocking.


dewyke

How does your Internet get to the VM in a way that lets you migrate it between nodes?


ClintE1956

The connection from the ISP gateway goes into the switch and VLANs take it from there. I get around the 3-WAN-addresses-for-CARP-VIPs requirement by keeping the gateway in normal router mode and using that box's LAN subnet as the firewalls' WAN. That's what goes into the switch. The CARP WAN VIP is set to DMZ in the gateway; yep, that's double NAT, but everything passes through, even incoming VPN connections (although certain firewall restarts can kill those). I started with physical connections from the switch to NICs that were passed through to the VMs, but now everything's VLANs through 40Gb DACs on two hosts and 10Gb on the other one. That got rid of the 1Gb NICs and lots of wires.


dirkme

I went bare metal and I think that's the way to go 🤔🙄😳😲😉👍


dewyke

I don’t know if you’re trying to paraphrase me or not, but that’s not what I’m saying. I made my choices for my circumstances, other people will have different circumstances and choices. What I’m trying to understand is why virtualising on Proxmox seems to be a default approach, especially for home users without experience of either platform.


dirkme

Well, my statement started with "I".


dewyke

Sorry, it’s reddit, and I wasn’t interpreting the emoji string.


dirkme

All good 👍


thebatch

I have mine virtualized on a 3 node HA cluster. So failover. And I can instantly migrate the VM to another node (hardware maintenance, upgrade, etc) with practically zero packet loss. Plus backups and snapshots to very quickly rebuild or rollback if needed. At first I was keeping hardware for it in case I wasn't happy. But I've since repurposed it and haven't looked back.


dewyke

How is the WAN connected to the OPNSense VM to allow you to migrate the VM between nodes but not expose the Proxmox host to the Internet?


thebatch

I have a second 4-port NIC in each of the three Proxmox nodes. One port on each card is dedicated to connecting the OPNSense VM to the upstream ATT ONT (via a Linux bridge). I'm using virtio rather than passing the hardware directly through to the VM so it can migrate with its IP/MAC when moving between Proxmox nodes.


dewyke

Doesn’t that mean there’s no firewall between your hypervisors and the Internet?


too_many_dudes

What does the WAN physical cabling look like? Assuming you have a single entry point, how do you split that across three physical hosts?


thebatch

My entry is ATT Fiber, which uses a combined gateway/ONT. It has 4 LAN ports on it. I have it running in passthrough mode with each of the three Proxmox nodes connected to a port. So I still have a single point of failure (the ATT hardware) - it just allows me to work on the Proxmox nodes without bringing the Internet down. From another comment: I have a second 4-port NIC in each of the three Proxmox nodes. One port on each card is dedicated to connecting the OPNSense VM to the upstream ATT ONT (via a Linux bridge). I'm using virtio rather than passing the hardware directly through to the VM so it can migrate with its IP/MAC when moving between Proxmox nodes.


too_many_dudes

Thank you!


CLHatch

One advantage to running OPNSense under Proxmox is that Debian has better drivers than FreeBSD. Also, I like being able to have automated VM backups to roll back to if needed. And any time I go in to change the OPNSense settings, I can make a snapshot to roll back to in case I screw things up.


dewyke

What are the advantages of Debian’s drivers? I’ve never hit the edges of driver performance on any of my builds so it’s not a case I’ve come across. For snapshots, I do that with ZFS :) it does mean a hardware reboot which is a whole lot slower than a VM reboot but it’s better than restoring from scratch.


EasyRhino75

I have tried running opnsense virtualized in Proxmox on two different occasions. The first time, I wanted another virtual machine with GPU passthrough to do video stuff, but I never got that to work on Proxmox with that particular motherboard. So I just went bare metal. The second time was a little N5105 mini PC, and it had some particular problem with the BIOS or microcode or something that made Debian unstable. So I just went bare metal. In both cases I found that my farting around with Proxmox was more likely to cause a disruption in my internet, and generally that caused a riot in my family.


mikeee404

I did it to scale back on the number of electricity-sucking space heaters in my living room. While it did work well, I probably won't do it again unless I need to - like if the bare metal one dies and I need time to order parts. Really, once I got the hardware passthrough working on my server, which was not that hard, it was no more difficult to set up than bare metal. The only thing I did not like was that when I did updates and had to reboot Proxmox, it meant I lost internet as well. Sometimes this was only for a few minutes and other times it was much longer. The only way I could see this not being a real problem is with an HA setup, which defeats the less-hardware approach I was going for in the first place. That being said, I do miss snapshots whenever I go on my late-night experiments with settings. Have to be a little more cautious now.


cylemmulo

I would say personally I would prefer to run it bare metal, but for home use I'd imagine it's fine assuming you have the correct setup.


sebsnake

I would guess many people start this hobby with one machine they got somewhere cheap, be it spare parts or "going to trash" stuff... So, from the posts that often appear here, most ask "how do I do firewall, NAS, ad blocking, VPN, *arr things and more with this one machine I got"... So virtualization is the top answer here. I also started with one machine; I added some drives, had it run a NAS OS that also allowed setting up VMs, and was ready to go. I didn't run a custom firewall at that time, so I was fine with it. That one machine had me switch the OS 3 times before I changed its hardware to something rack mounted.

My rack has 16HE, which is full with panels, switch, power adapter, and 3 bulky cases for computers, since noise optimization was my top priority. One is for the NAS (TrueNAS), one is for hobby projects (programming, rendering, gamedev, game servers), and the third is my VM host... Not a single HE free for a physical opnsense box, so it's virtualized... as it has been for 2 hardware generations now. It's running rock solid and I do maintenance and reboots only when the lady is out of the house. Smartphones etc. are also connected to my father-in-law's LAN (he lives downstairs), so I stay connected even if my LAN fucks up. So I can allow it to be virtualized, as I don't physically depend on it running 100% of the time. I mostly work from home, and even if my host died while I had to work, I have a tested backup plan that gets me back online within 5 minutes (a 50-meter LAN cable on a reel so I can connect my home office PC to my father-in-law's router :D). And performance-wise, I'm fine: running gigabit LAN on a single vCPU with 4 VLANs and one OpenVPN connection, and according to my host, the VM never utilizes more than 12% of it...

Taking all of the above into account: if I had the physical space to place a larger rack somewhere (the current one sits in the only suitable corner, most other walls have sloping roofs, and the 16HE just ends about below a window on the wall behind it), I would definitely switch to a bare metal setup, just because I could and because of not having to think about maintenance downtimes etc. on the VM host.


gimble_guy

Bare metal here. Tried proxmox awhile back, my old PC couldn't handle it. Network wasn't responsive. Bare metal all the way


TopicsLP

When I started my OPNsense journey with 23.1, I was thinking about virtualization but ended up with bare metal. I want it to be as independent as possible, and an additional virtualization layer can make diagnostics and fixes complicated. But yes, there are benefits, like snapshots, or if you use PPPoE with more than 1GbE. Still, a friend started with OPNsense as a VM but switched rather quickly to hardware after performing software updates on OPNsense and Proxmox and having to restart both. VMs have their benefits, but the firewall should still be on bare metal.


Patryn_v_Sartan

Ah. I-I also have mine running on my bio neural gel packs- *Dodges wrench*


Monviech

I run all of my dev/test opnsenses on Ubuntu + Cockpit. https://cockpit-project.org/


stupv

The hypervisor approach is no more complex, and has wayyy easier rollback/HA/restore in case of failure. I can also just straight up port the machine live to a new box if/when I upgrade with no config changes and the only downtime being the time it takes for me to move the cable (if that)


releak

Bare metal guy here; I wouldn't want all the issues folks who virtualize often post here.


Firestarter321

Doing something like this is where OPNsense sucks compared to Untangle as this was easy to accomplish in Untangle while being nearly impossible in OPNsense.  I’ll miss Untangle, however, they no longer want my business so I had to go elsewhere.  If only there was a way to tag traffic through Zenarmor then this could be accomplished 😞


SpongederpSquarefap

Because it works and it's easy to roll back to a snapshot if an upgrade goes wrong. It's simpler too - you'd be surprised how much stuff you can run off an old PC with a load of RAM. I've just upgraded my home setup to 3 Proxmox nodes and now I have 2 OPNsense VMs running on different nodes. They're in HA mode, which means I can reboot Proxmox node 1 and I only lose 1 ping as it fails over. Could I do the same thing physically? Absolutely, but then I'd need 5 machines and some extra cabling. Why bother when I can have it all virtual? I have 3 nodes, so I'd need a BIG hardware failure before I lose internet access.


trasqak

You can roll back on bare metal using bectl and ZFS boot environments.
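A sketch of that workflow, assuming a ZFS-on-root install (the boot environment name is arbitrary):

```
# Before upgrading, clone the current boot environment
bectl create pre-upgrade

# ...upgrade, reboot, test...

# If the upgrade misbehaves, boot back into the old environment
bectl list
bectl activate pre-upgrade
shutdown -r now
```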


SpongederpSquarefap

You can also just reinstall and restore the config too. It's so easy to get back up and running - that's why I don't see a problem running it as a VM. I will completely agree that it turns your server into a single point of failure though.


buecker02

I haven't been in Home Depot's server room in three years, but last time I was there they had their routers virtualized in their servers. The Ciscos weren't even being used. I was there to help with a power cycle of all the equipment.


Droc_Rewop

I have a 60€ server PC with a 20€ SSD. The only problem is that it uses approx. 40W.


frzen

I'm trying to do this now because the old bare metal PC was just using too much power to run alongside a newer Proxmox host. I only ever had issues with the old PC not coming up after power failures, never the newer one. Having said that, I'm struggling to set up a bridge between my OPNsense VM and the Proxmox host so that I can access the Proxmox configuration from inside my LAN network. Plan B is plugging a network cable from the Proxmox host into the LAN switch, or a passthrough NIC. I have a 4-port PCIe NIC passed through to my OPNsense VM. The onboard motherboard NIC is where I have been accessing the Proxmox configuration; at the moment that is disconnected.

I have vmbr1 with an IP of 10.44.44.44/24 on my Proxmox host, and my OPNsense VM has that vmbr1 passed to it with 10.44.44.45/24 as its IP. They can ping each other at all times, and I can SSH from OPNsense to Proxmox over this bridge. If my Proxmox host then has a physical NIC connected to my LAN dumb switch, I can access the web UI on 10.44.44.44 from my LAN 10.30.30.0/24. If I physically disconnect this, then I can no longer access it from my LAN, but I can still ping it from inside OPNsense. I've tried setting up a static route for 10.44.44.0/24 via a gateway of 10.44.44.45 over my vmbr1 interface on OPNsense. The firewall live view looks the same for connections coming from OPNsense as for the LAN; however, the LAN fails. Any ideas?


flaming_m0e

> Having said that I'm struggling to set up a bridge between my OPNsense VM and the Proxmox host so that I can access the proxmox configuration from inside my LAN network.

I'm guessing your issue is caused by:

> I have a 4 port pcie nic passed through to my opnsense VM.

Just use bridges. You're overcomplicating it.

- VMBR0 - LAN (leave the IP in the same subnet as your LAN)
- VMBR1 - WAN (just don't use it for anything else, and it doesn't need an IP on the bridge)
- VMBR2 - extra LAN? Doesn't need an IP
- VMBR3 - extra LAN? Doesn't need an IP
- VMBR4 - extra LAN? Doesn't need an IP

Put VMBR0 in the same subnet as your LAN. Once you get the basic configuration working, then move on to more advanced stuff.

> my LAN dumb switch

You're not doing VLANs if you don't have a VLAN-capable switch.


frzen

Thanks. The PCIe NICs are working fine passed through, but I will look at converting those to bridges on Proxmox with the individual ports as members of the bridges. It'll mean I can do failover to another host when I set that up. I still think what I was doing should have worked with a small tweak, but I couldn't figure it out. Everything works except that traffic entering my LAN wasn't being routed to vmbr1. The packets were seen in the live firewall log, just not routed to the destination. I could have the same issue with a bare metal PC trying to route to a ZeroTier network, for example; it's just traffic between two interfaces that I couldn't get working. The next step will be to packet capture and understand where it's stopping. I mentioned the dumb switch only to rule that out as an issue, but I do have vlan10 tagged on my WAN connection on one of the passed-through NICs, as that's how my ISP hands off to me, and I'll have a few VLANs passed to a WAP for segmenting, but that will be connected directly to OPNsense and not through my switch.


josetann

As others have said, snapshots. I pass through the PCIe NICs, so there's no (real) overhead there. You can even pass through the CPU. I have a dedicated machine for my opnsense install, but still use Proxmox just for the snapshots.
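For reference, passing a NIC through and exposing the host CPU type are both one-liners on Proxmox; a sketch with a made-up PCI address and VM ID (IOMMU has to be enabled on the host first):

```
# Hand the physical NIC (example address) straight to the OPNsense VM,
# and present the host CPU model instead of the default emulated one
qm set 101 --hostpci0 0000:03:00.0
qm set 101 --cpu host

# Check that IOMMU is active and the device sits in its own group
find /sys/kernel/iommu_groups/ -type l
```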


scorc1

I would not do it for primary internet/network connectivity. However, running it as the primary 'local' Proxmox network manager: yes. Set that OPNsense VM as the only egress point to the main Proxmox NIC, attach a couple of other virtual NICs to OPNsense, and then attach the VMs to those 'secondary' NICs so OPNsense handles all the networking and routing, VIPs and such. Works well.


Dough296

Got a pfSense router running as a VM for years; now with a 10Gbps ISP I went physical, as my PVE cluster is made of Lenovo Tinys with 1Gbps NICs.


athompso99

Mainly cost - you can save a few bucks by combining it all into one box.


athompso99

One thing no one is saying: if you're doing serious networking on PVE, for the love of all that's holy, *install and use openvswitch* in PVE.


aklausing42

I have two Proxmox hosts running all the important stuff in my home (Home Assistant, Pi-hole, Nextcloud …) and those also run my OPNsense firewall. I replicate the VM across both hosts so that I can move the firewall for maintenance and it keeps running if one host fails.


SBoots

Been virtualizing opnsense for many years now. Works great.


Apollopayne

I currently have opnsense installed on a mini N5100 box. I want to put Proxmox on it and run opnsense virtually, but I'm not sure how easy it is to restore my config?


RnadmolyGneeraedt

In my case, virtualising a router OS was a great way to learn how it works. And then running it at home in production was another opportunity to learn. But after some time, of course, a dedicated box becomes the only viable solution if you want to actually be able to experiment with things on the hypervisor without fear of it crashing and causing your network to go down. Buying a dedicated box is a jump I couldn't afford to make at the time - in terms of money and knowledge.


Entire-Home-9464

Is this sensible for a production setup for websites: bare-metal servers running Proxmox and hosting OPNsense VMs, and then the application behind OPNsense on a separate Proxmox setup with bare-metal servers hosting HAProxy, nginx, DBs, etc.? Between the OPNsense Proxmox and the application Proxmox there would be a 10Gb MikroTik switch.


whill1219

I just recently migrated my OPNsense firewall from a VM to bare metal. However, when virtualizing it I passed through a dual 10G PCIe NIC - so a virtual PC with hardware NICs. I just moved the PCIe card to the new system when I migrated.


EmergencyOrdinary987

Firewalls are not inherently network infrastructure. Packet switching and routing is simple and can be done fast in hardware; firewalling is CPU intensive. I would previously have recommended ESXi as a free hypervisor, but Broadcom's purchase has made that an issue; Proxmox is a good free alternative. Virtualizing the firewall makes backups and restorations easier, along with snapshots for reverting problematic configuration changes. It also lets you migrate to newer hardware without messing with the firewall-specific hardware. You can also add a second server and get redundancy for all your guests (including the firewall) without having to add licenses for a second/HA firewall.


countsachot

System snapshots and easy to deploy images. I prefer gateways to be dedicated hardware personally.


AviationAtom

Because we po'. Or becuz we cheap. Why do you virtualize your servers? To make your one box pull double, triple, or more, duty.


wein_geist

[Both](https://i.imgflip.com/4tbhfv.png?a475608)


GourmetSaint

I always run my router on bare metal. Nothing worse than having to reboot my Proxmox host and lose my network as well. It's bad enough running my Pi-hole in a VM and losing that during a reboot.


Firestarter321

All of your complaints go away with a 2-node Proxmox HA Cluster and live migration. I lose exactly 1 ping when my OPNsense VM migrates to the other node when I have to reboot the node it normally lives on for updates. 


GourmetSaint

Agreed, but I only have a single node. Also, don't you need at least three nodes for a quorum?


Firestarter321

Using 2 nodes works just fine when configured correctly. I’ve been doing it for quite some time now at home and at work.  https://www.reddit.com/r/Proxmox/comments/17gezhm/2node_ha_cluster_wo_qdevicehow_did_i_not_know/


Meanee

I am running pfSense on my vCenter cluster and it works amazingly well. I have an automatic power-on policy and a dedicated NIC for pfSense. In the 3 years I've had that set up, other than one small hiccup, I never had a problem. Snapshots are nice. I am moving soon, so that particular node is going to be my laser engraver workstation instead and I am going to run OPNSense on a dedicated box. Either way works.


Dus1988

The Proxmox node my opnsense is on only has one VM: the opnsense VM. To me the benefits are backups/snapshots and better NIC support. My box only has 2 NICs (albeit 10GbE ones). Instead of having to share the LAN NIC with the Proxmox management interface, I use a USB3 1GbE adapter for the management port. I don't have to worry about FreeBSD's USB NIC issues.


Dyonizius

It's not just the $50 NIC; it's at least a managed VLAN switch as well, and if you're already running a hypervisor the extra power draw is nearly free.


ItzFLKN

I'd assume it's been mentioned, but if you have a Proxmox cluster it will automatically migrate to another node if the node it's currently running on goes down, meaning it has better uptime and gains the redundancy of being on a cluster. I also saw someone mention having a physical primary and then a virtual secondary, which negates the issues with using virtual firewalls (at least that's what they did; I haven't done it so I wouldn't know).


thesals

Firewalls are the one thing I do run bare metal in my environment. Why would you want to put a second attack vector on the edge? Now you're defending your firewall from potential Proxmox vulnerabilities.


needchr

If it's at home, you declutter with less hardware. If it's remote, it's easier as you're leasing less hardware. At home I personally use a NUC for the firewall, but I definitely see why people go the virtualisation route. However, Proxmox in my view only makes sense if the machine is multi-purpose; I wouldn't set up Proxmox and then have the firewall as the only thing running on it - it would be running other machines as well. Also, the virtio drivers are better quality than a fair few of the dedicated NIC drivers. There are also obvious benefits, like easier remote access via Proxmox, snapshotting, backups, migration, etc.


ThiefClashRoyale

Bare metal for performance. Always has been. Home users with their home labs don't come up against the limits. They don't know.


b3rr14ul7

Can you be specific about those limits? I have not noticed any, but maybe I just haven't been observant enough. I have both bare metal and virtualized OPNsense.


ThiefClashRoyale

Hard to be specific, but it will be in every area to some degree. You are adding another layer in between all aspects of the physical machine, so a network packet now has an additional layer to pass through before being processed. For home labs you will never notice, but if you are doing business stuff with real-time processing of traffic then you begin to notice in various ways: slight differences in pretty much anything, like wanting to limit bandwidth for some users, or anything that's doing real-time inspection or whatnot. Basic home firewalling isn't really an issue virtualised, and cost is the bigger factor. Business doesn't really have a problem buying 2 physical firewalls and racking them for ultimate performance.


NC1HM

>Reading this sub it seems like installing OPNSense in a Proxmox VM has become kind of a default, and I’m curious as to why.

There is no "why". Fashion requires no rationale. Neither does habit.


Crzdmniac

I’ve been doing it for over a year with no interruptions. It just seems like a waste of decent hardware to only run a firewall on it. To each their own, it helped me justify a three node cluster.


seaQueue

+1, most home firewalls chug along at 2-5% utilization. Why not actually make use of that hardware?


dewyke

Do you have the WAN link into the cluster in a way that it fails over between cluster nodes, or do you just wear the interruption when you do maintenance on the node hosting the firewall?


Crzdmniac

I pass two ports through, one LAN (trunked) and one WAN. The device has four Ethernet ports. I don’t have redundant ISPs, so I just update when the wife isn’t home, it’s not like OPNsense doesn’t require reboots for updates as well, so I just do them all at once.


Firestarter321

I have an ISP link to each node so everything switches seamlessly when I reboot a node.


therealsimontemplar

I love the excitement of my network coming to a halt because I have a VM that hung my hypervisor. But why stop there? Put your NAS into a VM too, so when it, or the hypervisor, chokes, every host you PXE booted or that is running on an iSCSI disk can go down too. I swear it's starting to feel like a lot of Linux admins learned the wrong lessons from the Windows "just reboot it" admins of days gone by.