Introduction
An overview of our implementation
For the purposes of this documentation, we will primarily be concerned with the network and switching fabric itself. Although we will reference and detail many of our various devices, building those is largely left as an exercise for the reader.
We will be implementing two routers for this network. Our Edge router will be responsible for tasks related to the WAN, while our Internal router will be the gateway for our internal networks and serve as our primary network firewall.
Broadly speaking, an edge router is the device that accepts inbound traffic into the network and directs outbound traffic to the ISP or peerings. Edge routers are also well placed to handle QoS, offloading that work from the core network and mitigating bottlenecks.
For example, if we run Differentiated Services Code Point (DSCP) classification within the network, we can create a service policy at the Edge that queues traffic based on priorities we define. Perhaps we have a media server that streams to the WAN, but we want our personal desktop to have priority over that bulk streaming traffic in the event of congestion.
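As a rough sketch of what that could look like, here is a VyOS 1.3-style traffic-policy that shapes outbound WAN traffic and gives DSCP EF-marked packets a guaranteed slice of the uplink. The policy name, percentages, the 50mbit uplink figure, and eth0 as the WAN interface are all placeholders, and VyOS 1.4 moved this configuration under "qos policy", so treat it as illustrative rather than copy-paste:

    # Shape everything leaving the WAN interface to the uplink rate (placeholder: 50mbit)
    set traffic-policy shaper WAN-OUT bandwidth '50mbit'
    set traffic-policy shaper WAN-OUT default bandwidth '60%'
    set traffic-policy shaper WAN-OUT default queue-type 'fair-queue'

    # Class 10: traffic marked EF (DSCP 46), e.g. the desktop, gets a guaranteed share
    set traffic-policy shaper WAN-OUT class 10 match EF-TRAFFIC ip dscp '46'
    set traffic-policy shaper WAN-OUT class 10 bandwidth '40%'
    set traffic-policy shaper WAN-OUT class 10 ceiling '100%'

    # Apply the policy outbound on the WAN-facing interface
    set interfaces ethernet eth0 traffic-policy out 'WAN-OUT'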
Additionally, edge routers can serve as an entry point into the LAN via various VPN schemes, allowing a remote client to land on the internal network, but still be classified as “external” traffic on the interior firewall (as from the firewall’s perspective, that traffic originates from the Edge router).
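As a minimal sketch of that idea, a WireGuard landing interface on the Edge router might look something like the following in VyOS 1.3. The interface name, addressing, and peer key are placeholders, and some keywords (e.g. pubkey vs. public-key) changed in later VyOS releases:

    # VPN landing network terminated on the Edge router (placeholder addressing)
    # (a WireGuard keypair for the router is generated separately in op mode)
    set interfaces wireguard wg0 address '10.255.0.1/24'
    set interfaces wireguard wg0 port '51820'

    # One remote client; the public key is a placeholder
    set interfaces wireguard wg0 peer LAPTOP pubkey '<client-public-key>'
    set interfaces wireguard wg0 peer LAPTOP allowed-ips '10.255.0.2/32'

Because the tunnel terminates on the Edge router, anything arriving from that interface still has to cross the interior firewall to reach the LAN, which is exactly the behavior described above.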
Ideally, we'll pick a device that can run our router operating system of choice (VyOS), has a relatively low TDP, is low-noise or silent, and has at least two Gigabit Ethernet ports. Naturally, that gives us a wide set of options.
The internal, or core, router serves multiple roles in our network. Not only is it the default gateway for virtually all of our network segments, it also handles the bulk of our firewall rules, serves internal DHCP, maintains our dynamic DNS, and performs a variety of monitoring tasks. The core router sits directly in the middle of our network, and one of our main goals is to direct virtually all traffic through it. Running all of these processes can consume a lot of resources (CPU/RAM), which is another reason why we keep these duties separate from the Edge router.
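To make a couple of those duties concrete, here is a pared-down VyOS sketch of a DHCP scope and a stateful LAN-to-WAN ruleset. The subnet, names, and addresses are invented for illustration, and the syntax shown is the pre-1.4 form (1.3 uses dns-server and name-based firewall rulesets, both of which changed in 1.4):

    # DHCP for a hypothetical LAN segment
    set service dhcp-server shared-network-name LAN subnet 10.0.10.0/24 default-router '10.0.10.1'
    set service dhcp-server shared-network-name LAN subnet 10.0.10.0/24 dns-server '10.0.10.53'
    set service dhcp-server shared-network-name LAN subnet 10.0.10.0/24 range 0 start '10.0.10.100'
    set service dhcp-server shared-network-name LAN subnet 10.0.10.0/24 range 0 stop '10.0.10.199'

    # A minimal stateful ruleset for traffic heading toward the Edge router
    set firewall name LAN-TO-WAN default-action 'drop'
    set firewall name LAN-TO-WAN rule 10 action 'accept'
    set firewall name LAN-TO-WAN rule 10 state established 'enable'
    set firewall name LAN-TO-WAN rule 10 state related 'enable'
    set firewall name LAN-TO-WAN rule 20 action 'accept'
    set firewall name LAN-TO-WAN rule 20 source address '10.0.10.0/24'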
Our switch should be a managed device that supports 802.1Q (VLAN) tagging and link aggregation. Some additional features we might be interested in are DSCP-based QoS and LACP (Link Aggregation Control Protocol) support.
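On the router side of that trunk, the matching VyOS configuration would be an LACP bond carrying 802.1q sub-interfaces. A sketch, assuming eth1 and eth2 as the member ports and made-up VLAN numbers (the bond member syntax changed in VyOS 1.4):

    # LACP (802.3ad) bond toward the switch
    set interfaces bonding bond0 mode '802.3ad'
    set interfaces ethernet eth1 bond-group 'bond0'
    set interfaces ethernet eth2 bond-group 'bond0'

    # 802.1q sub-interfaces, one per VLAN, each acting as that segment's gateway
    set interfaces bonding bond0 vif 10 address '10.0.10.1/24'
    set interfaces bonding bond0 vif 20 address '10.0.20.1/24'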
More details about wireless to come later.
As stated above, this section simply lists examples of what I have. These devices are not mandatory but are documented for posterity.
I have a whiteboxed Linux system that handles a ZFS RAIDZ2 (analogous to RAID 6) array of six 8TB drives for me. It has a 4x1GbE Intel NIC, and having a 4-port LAG to the switch is a priority. This device doesn't run any services beyond NFS/SMB, but ZFS is fairly RAM-hungry, so it runs on an older Ryzen 5 I have.
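For reference, creating a pool like that is a single command; the pool name and device names below are placeholders (in practice, /dev/disk/by-id paths are preferable to the short sdX names shown here):

    # Create a RAIDZ2 pool from six drives (placeholder device names)
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg

    # Verify layout and health
    zpool status tank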
Another whiteboxed server, this time a VMware ESXi host with a small Intel Xeon in it. This is the primary host for many internal services, media services, and lab VMs. It also has a 4x1GbE NIC but regrettably doesn't support LACP, so we'll be using a static (non-LACP) LAG here.
NEMS is a Raspberry Pi-based Nagios server and will be the primary monitor for our entire architecture. I like NEMS because it's compatible with both the Pi and the little 5" TFT screen that I have. This means the server closet has a physical monitor with visible alarms on it, should I happen to be in there breaking things. I could easily virtualize Nagios on the hypervisor, but I like it being standalone so that it can also monitor the hypervisor.
PiHole is a DNS sinkhole that will block ads on our network at the DNS level. Both of our internal name servers will forward requests to it. This doesn't have to be a physical device - you could easily virtualize it - but I choose to keep it physical for the same reasons I keep NEMS physical: I don't want the status of the hypervisor to affect all outbound DNS. Note that you could run two of these for redundancy, or even virtualize one of them. I currently run just a single one.
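In bind9 terms, that forwarding is just an options-level forwarders clause on both name servers; the PiHole address here is a placeholder:

    // /etc/bind/named.conf.options (snippet): send recursive queries to PiHole
    options {
        forwarders { 10.0.10.53; };   // placeholder PiHole address
        forward only;
    };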
My primary DNS server (NS1) is a virtual machine on the hypervisor. The secondary (NS2) is also on a Raspberry Pi, running the Pi flavor of Ubuntu. Both of these servers run bind9, and the master (NS1) is allowed to transfer zones to NS2. Both could be virtual, but again, redundancy.
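That transfer allowance amounts to a few lines in each server's zone definition; the zone name and addresses below are hypothetical:

    // On NS1 (master), /etc/bind/named.conf.local
    zone "home.example" {
        type master;
        file "/etc/bind/db.home.example";
        allow-transfer { 10.0.10.54; };   // NS2
        also-notify { 10.0.10.54; };
    };

    // On NS2 (secondary)
    zone "home.example" {
        type slave;
        file "/var/cache/bind/db.home.example";
        masters { 10.0.10.52; };          // NS1
    };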