I am not overly happy with my current firewall setup and am looking into alternatives.

I was previously somewhat OK with OPNsense running on a small APU4, but I would like to upgrade from that, and OPNsense feels like it is holding me back with its convoluted web UI and (for me at least) FreeBSD strangeness.

I tried setting up IPFire, but I can’t get it to work reliably on hardware that runs OPNsense fine.

I thought about doing something custom, but I don’t really trust myself to get the firewall rules right on the first try. Also, for things like DHCP and port forwarding, a nice, easy web GUI is convenient.

So one idea that came up was to run a normal Linux distro on the firewall hardware and set up OPNsense in a VM on it. That way I could keep a barebones OPNsense around for convenience, but be more flexible in how I use the hardware otherwise.

Am I assuming correctly that if I bind the VM to dedicated hardware network interfaces for WAN and LAN respectively, it should behave like, and be about as secure as, a bare-metal firewall?
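
For illustration, here is roughly what that binding looks like with KVM/libvirt (just one possible hypervisor; the PCI addresses are made-up examples):

    # detach the WAN and LAN NICs from the host so the VM owns them exclusively
    virsh nodedev-detach pci_0000_02_00_0   # WAN NIC (example address)
    virsh nodedev-detach pci_0000_03_00_0   # LAN NIC (example address)

    # then hand each one to the guest in its domain XML:
    # <hostdev mode='subsystem' type='pci' managed='yes'>
    #   <source>
    #     <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    #   </source>
    # </hostdev>

With managed='yes' libvirt handles the detach/reattach itself, so the virsh calls are optional; an enabled IOMMU (VT-d/AMD-Vi) is required either way.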

  • Illecors@lemmy.cafe · 6 months ago

    I’d been running OPNsense in a VM for some time. I used Xen as the hypervisor, but that shouldn’t really be a requirement. Passed the NICs through and it was golden! All the benefits of a VM - quick boot-up, snapshots on the hypervisor - it’s truly glorious :)
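
    In case it helps anyone: NIC passthrough in an xl-style Xen guest config is a one-liner (PCI addresses are examples):

        # /etc/xen/opnsense.cfg - pass both NICs through to the domU
        pci = [ '0000:02:00.0', '0000:03:00.0' ]   # WAN and LAN

    The host also needs the devices made assignable first (e.g. xl pci-assignable-add 0000:02:00.0) and the IOMMU enabled.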

    • poVoq@slrpnk.net (OP) · 6 months ago

      Sounds great. What about the hardware acceleration features of the NIC? I read somewhere that it’s better to disable support for those in OPNsense when running it in a VM?

        • poVoq@slrpnk.net (OP) · 6 months ago

          I just saw that option. What would be the advantages and disadvantages of this?

          I guess hardware acceleration should work when I pass through the actual NIC device?
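
          For reference, these offload features are ordinary FreeBSD interface flags, so they can be inspected and toggled from the OPNsense shell (vtnet0 is an example interface name; the GUI exposes equivalent checkboxes, under Interfaces > Settings if I recall correctly):

              # show which offload capabilities the NIC currently advertises
              ifconfig vtnet0
              # disable checksum offload, TSO and LRO, the usual VM suspects
              ifconfig vtnet0 -txcsum -rxcsum -tso4 -tso6 -lro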

  • carzian@lemmy.ml · 6 months ago

    So you’re planning to reuse the same hardware that the firewall is running on now, by installing a hypervisor and then running only OPNsense in it?

    • poVoq@slrpnk.net (OP) · 6 months ago

      It is more powerful hardware with much higher single-thread performance, which should help with OPNsense networking. Ultimately the goal is to handle more than 1 Gbit of WAN throughput, which my current firewall hardware is incapable of, although that is still in the future.

      But I feel like I could utilize this hardware better if it were running something other than OPNsense, hence the idea to run OPNsense in a VM.

  • Admiral Patrick@dubvee.org · 6 months ago · edited

    > Am I assuming correctly that if I bind the VM to dedicated hardware network interfaces for WAN and LAN respectively, it should behave like, and be about as secure as, a bare-metal firewall?

    Correct.

    I did that in my old playground VMware stack. I’ll leave you with my cautionary tale (though depending on the complexity of your network, it may not fully apply).

    My pfSense firewall (OPNsense didn’t exist yet) was a VM on my ESX server. I also had it managing all of my VLANs and firewall rules, and everything was connected to distributed vSwitches in VMware… Everything worked great until I lost power for longer than my UPS could hold on and had to shut down.

    Shutdown was fine, but the cold start left me in a chicken-and-egg situation: vSphere couldn’t connect to the hypervisors because the firewall wasn’t routing to them. I could log into the ESX host directly to start the pfSense VM, but since vSphere wasn’t running, the distributed switches weren’t up.

    The moral is: If you virtualize your core firewall, make sure none of the virtualization layers depend on it. 😆
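
    On a Linux/KVM host the equivalent precaution is cheap; a minimal sketch, assuming the firewall VM is named opnsense:

        # start the firewall VM with the host, no management plane in the loop
        virsh autostart opnsense

    and keep its networking on host-local constructs (passed-through NICs or plain bridges) rather than anything that needs a controller to come up first.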

    • poVoq@slrpnk.net (OP) · 6 months ago

      Thanks for the quick reply.

      What about the LAN side: can I bridge that adapter into the VM host’s internal network somehow, to avoid an extra hop out to the main switch and back in via another network port?

      • Admiral Patrick@dubvee.org · 6 months ago · edited

        It may depend on your hypervisor, but generally yes. You should be able to give the VM a virtual NIC in addition to the two physical ones you bind, and it shouldn’t care about the difference when you create a LAN bridge interface.

        Depending on your setup/layout, either enable spanning tree or watch out for potential bridge loops, though.
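
        As a concrete sketch on a Linux host with iproute2 (enp2s0 stands in for the physical LAN NIC):

            # create the LAN bridge and enable spanning tree, per the caveat above
            ip link add br-lan type bridge
            ip link set br-lan type bridge stp_state 1
            # enslave the physical LAN port and bring everything up
            ip link set enp2s0 master br-lan
            ip link set enp2s0 up
            ip link set br-lan up

        The OPNsense VM’s virtual LAN NIC then attaches to br-lan in the hypervisor, and other guests on the same bridge reach the firewall without leaving the host.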

  • gray@pawb.social · 6 months ago

    If you have a managed switch, you can also just use VLAN tags for your WAN and not have to pass any NICs through to the VM.
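
    As a sketch on a Linux host (interface name and VLAN ID invented), the idea is that WAN arrives tagged on the trunk port and the VM gets a virtual NIC on the tagged subinterface instead of a passed-through card:

        # WAN is delivered as tagged VLAN 100 on the trunk port eth0
        ip link add link eth0 name eth0.100 type vlan id 100
        ip link set eth0.100 up
        # then bridge or bind the VM's WAN interface to eth0.100 in the hypervisor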

    • poVoq@slrpnk.net (OP) · 6 months ago

      Yeah, I thought about that, but it sounds like a footgun waiting to happen.

      • gray@pawb.social · 6 months ago

        I’ve been doing it for years, no issues. It’s fairly common in the enterprise as well.

  • Smash@lemmy.self-hosted.site · 6 months ago

    I use a Proxmox cluster and assigned dedicated NICs to my OPNsense VMs (also clustered). I connected the NIC ports assigned to the OPNsense VMs directly with a cable and reserved that link for CARP usage. I can easily download at 1 GB/s, and the VMs switch over without any packet loss during failover. 10/10, would do it again.
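
    CARP itself is set up through the OPNsense GUI, but underneath it is FreeBSD’s carp(4); conceptually, the master claims a shared virtual IP like this (vhid, password and addresses are invented):

        # master: lowest advskew wins the election for the shared IP
        ifconfig vtnet1 vhid 1 advskew 0 pass examplepw alias 192.168.1.1/24
        # the backup node uses the same vhid with a higher advskew, e.g. 100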

  • mb_@lemm.ee · 6 months ago

    I can’t remember all the details, but depending on the CPU you are running, you may need some extra configuration in OPNsense.

    There were a few issues on my servers, which run older Intel Xeon CPUs, but I eventually fixed them by adding the proper flags to work around the various CPU bugs.

    Other than that, running on a VM is really handy.
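
    The exact flags depend on the CPU and hypervisor, but as a rough sketch with QEMU/KVM, the usual first step is exposing the host CPU model (including its microcode mitigations) to the guest instead of a generic one:

        # libvirt domain XML
        <cpu mode='host-passthrough'/>

        # or directly on the QEMU command line
        qemu-system-x86_64 -cpu host ...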