Hardware

For the purposes of this documentation, we will primarily be concerned with the network and switching fabric itself. Although we will reference and detail many of our various devices, construction of those will largely be left as an exercise for the reader.

Routing

We will be implementing two routers for this network. Our Edge router will be responsible for tasks related to the WAN, while our Internal router will be the gateway for our networks and serve as our primary network firewall.

Edge Router

What is an Edge Router?

Broadly speaking, an edge router is the device that accepts inbound traffic into the network, and directs outbound traffic to your ISP or peerings. Edge routers are well placed to handle QoS and mitigate bottlenecks from the core network.

For example, if we run Differentiated Services Code Point (DSCP) classification within the network, we can create a service policy at the Edge that queues traffic based on priorities we define. Perhaps we have a media server that streams to the WAN, but we want to set our personal desktop to have priority over that bulk stream traffic in the event of congestion.
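As a rough sketch of that idea, the following shows what such a service policy might look like in VyOS 1.3-style `traffic-policy` syntax. The policy and match names (WAN-OUT, INTERACTIVE, BULK), the 50mbit uplink rate, and the interface name are placeholders for illustration, not values from my actual configuration:

```
# Egress shaper on the Edge router's WAN port (VyOS 1.3-style syntax).
set traffic-policy shaper WAN-OUT bandwidth '50mbit'
set traffic-policy shaper WAN-OUT default bandwidth '50%'
set traffic-policy shaper WAN-OUT default ceiling '100%'

# Class 10: interactive desktop traffic, marked DSCP AF41 (34) internally,
# gets a guaranteed slice of the uplink.
set traffic-policy shaper WAN-OUT class 10 bandwidth '30%'
set traffic-policy shaper WAN-OUT class 10 ceiling '100%'
set traffic-policy shaper WAN-OUT class 10 match INTERACTIVE ip dscp '34'

# Class 20: bulk media streams marked CS1 (8) get a small guarantee and
# whatever is left over.
set traffic-policy shaper WAN-OUT class 20 bandwidth '10%'
set traffic-policy shaper WAN-OUT class 20 ceiling '100%'
set traffic-policy shaper WAN-OUT class 20 match BULK ip dscp '8'

# Attach the policy outbound on the WAN interface (eth0 here).
set interfaces ethernet eth0 traffic-policy out 'WAN-OUT'
```

Under congestion, the shaper honors the guarantees first, so the desktop's AF41 traffic wins over the CS1 bulk stream; when the link is idle, both classes can borrow up to the full uplink.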

Additionally, edge routers can serve as an entry point into the LAN via various VPN schemes, allowing a remote client to land on the internal network, but still be classified as “external” traffic on the interior firewall (as from the firewall’s perspective, that traffic originates from the Edge router).
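To make that concrete, here is a minimal sketch of a WireGuard landing point on the Edge router. This follows VyOS 1.3 syntax (peer/key commands differ between VyOS versions); the interface, peer name, port, and the 10.255.0.0/24 tunnel subnet are all placeholders:

```
# Hypothetical WireGuard remote-access interface on the Edge router.
set interfaces wireguard wg0 address '10.255.0.1/24'
set interfaces wireguard wg0 port '51820'
set interfaces wireguard wg0 description 'Remote access VPN'

# One peer per remote client. Traffic landing here is routed inward,
# so the interior firewall still sees it as arriving from the Edge.
set interfaces wireguard wg0 peer LAPTOP pubkey '<client-public-key>'
set interfaces wireguard wg0 peer LAPTOP allowed-ips '10.255.0.2/32'
```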

Hardware Selection

Ideally, we’ll pick a device that can run our router operating system of choice (VyOS), has a relatively low TDP, is quiet (or fanless), and offers at least two Gigabit Ethernet ports. Naturally, that gives us a wide set of options.

  • ZimaBoard - This is what I selected. I opted for the ZimaBoard 432 due to its relatively high PassMark score, 4GB of RAM, 6W TDP, dual GbE NICs, and low cost (I paid $127.92 at checkout). Additionally, should I desire future upgrades, it has a PCIe 2.0 x4 slot on the side. One could very easily just slap an Intel NIC on as an expansion.

Internal Router

What is our Internal Router?

The internal, or core, router serves multiple roles in our network. Not only is it the default gateway for virtually all of our network segments, but it also enforces the bulk of our firewall rules, handles internal DHCP, maintains our dynamic DNS, and runs a variety of monitoring. The core router sits directly in the middle of our network, and one of our main goals is to direct virtually all traffic through it. Running all of these processes can potentially consume a lot of resources (CPU/RAM), which is another reason why we keep it separate from the duties of the Edge router.
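As a taste of the DHCP duty, here is what serving one segment looks like in VyOS 1.3 syntax. The network name, subnet, and addresses are placeholders (in older VyOS releases, `name-server` was spelled `dns-server`):

```
# Sketch: DHCP scope for a single LAN segment on the core router.
set service dhcp-server shared-network-name LAN subnet 10.0.10.0/24 default-router '10.0.10.1'
set service dhcp-server shared-network-name LAN subnet 10.0.10.0/24 name-server '10.0.10.53'
set service dhcp-server shared-network-name LAN subnet 10.0.10.0/24 range 0 start '10.0.10.100'
set service dhcp-server shared-network-name LAN subnet 10.0.10.0/24 range 0 stop '10.0.10.200'
set service dhcp-server shared-network-name LAN subnet 10.0.10.0/24 lease '86400'
```

Each additional VLAN gets its own subnet stanza, which is part of why this box accumulates configuration (and resource usage) faster than the Edge.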

Hardware Selection

Switching

What do we need in a switch?

Our switch should be managed, support 802.1Q (VLAN) tagging, and support link aggregation. Some additional features we might be interested in are DSCP-based QoS and LACP (Link Aggregation Control Protocol) support.
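The switch-side configuration is vendor-specific, but for context, here is a sketch of what the router side of a tagged, aggregated trunk looks like in VyOS 1.3. The member interfaces, VLAN IDs, and addresses are placeholders:

```
# An LACP bond (802.3ad) carrying tagged VLANs toward the switch.
# Use a mode like 'xor-hash' instead if the switch only does static LAGs.
set interfaces bonding bond0 mode '802.3ad'
set interfaces bonding bond0 member interface 'eth1'
set interfaces bonding bond0 member interface 'eth2'

# 802.1Q sub-interfaces (vif = VLAN ID); the switch's trunk ports must
# tag the same VLANs on their side of the LAG.
set interfaces bonding bond0 vif 10 address '10.0.10.1/24'
set interfaces bonding bond0 vif 20 address '10.0.20.1/24'
```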

Hardware Selection
  • The TP-Link TL-SG1024DE does not support LACP, but does handle a static LAG fine. For my purposes, this is satisfactory.

Wireless

More details about wireless to come later.

Hardware Selection

Additional Hardware

As stated above, this section is purely examples of what I have. These devices are not mandatory but are listed for completeness.

Network Attached Storage

I have a whiteboxed Linux system that handles a ZFS RAID-Z2 (analogous to RAID 6) array of six 8TB drives for me. I’ve got a 4x1GbE Intel NIC in it, and having a 4-port LAG to the switch is a priority. This device doesn’t run any services beyond NFS/SMB, but ZFS is fairly RAM-hungry, so it runs on an older Ryzen 5 I have.
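For reference, building a pool like that is a one-liner. The pool name (`tank`), dataset name, and disk paths below are placeholders, not my actual layout; prefer stable `/dev/disk/by-id` paths over `/dev/sdX` so the pool survives device reordering:

```
# Six-disk RAID-Z2 pool; survives any two drive failures.
# ashift=12 aligns to 4K sectors, which suits most modern large drives.
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# Cheap wins: transparent compression, plus a dataset shared over NFS.
zfs set compression=lz4 tank
zfs create -o sharenfs=on tank/media
```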

Hypervisor

Another whiteboxed server, this time a VMware ESXi host with a small Intel Xeon in it. This is the primary host for many internal services, media services, and lab VMs. It also has a 4x1GbE NIC, but regrettably doesn’t support LACP, so we’ll be using a static LAG here.

NEMS

NEMS is a Raspberry Pi-based Nagios server, and will be the primary monitor for our entire architecture. I like NEMS because it’s compatible with both the Pi and the little 5" TFT screen that I have. This means the server closet has a physical monitor with visible alarms on it, should I happen to be in there breaking things. I could easily virtualize Nagios on the hypervisor, but I like it being standalone so that it can monitor the hypervisor as well.

PiHole

PiHole is a DNS sinkhole that will be blocking ads on our network at the DNS level. Both of our internal name servers will forward requests to it. This doesn’t have to be a physical device; you could easily virtualize it. I choose to keep it physical for the same reasons I keep NEMS physical - I don’t want the status of the hypervisor to affect all outbound DNS. Note that you could run two of these for redundancy, or even virtualize one of them. I currently run just a single one.
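That forwarding arrangement is a couple of lines in each internal name server's bind9 options. The PiHole address below (10.0.10.53) is a placeholder:

```
// named.conf.options on each internal name server
options {
    // ...existing options...
    forwarders { 10.0.10.53; };   // the PiHole
    forward only;                 // never bypass the sinkhole
};
```

Note that `forward only` means DNS resolution fails if the PiHole is down, which is exactly why keeping it off the hypervisor (or running a second one) matters.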

Secondary Domain Name Server

My primary DNS server (NS1) is a virtual machine on the hypervisor. The secondary (NS2) runs on a Raspberry Pi running the Pi flavor of Ubuntu. Both of these servers run bind9, and the master (NS1) is allowed to transfer zones to NS2. Both of these could be virtual, but again, redundancy.
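A minimal sketch of that master/slave arrangement in bind9 follows; the zone name and IP addresses are placeholders for my actual values:

```
// On NS1 (master) - named.conf.local
zone "home.lan" {
    type master;
    file "/etc/bind/zones/db.home.lan";
    allow-transfer { 10.0.10.54; };   // permit AXFR to NS2 only
    also-notify { 10.0.10.54; };      // push NOTIFYs so NS2 refreshes promptly
};

// On NS2 (slave) - named.conf.local
zone "home.lan" {
    type slave;
    masters { 10.0.10.10; };          // NS1
    file "/var/cache/bind/db.home.lan";
};
```

With this in place, edits only ever happen on NS1; bumping the zone's serial number triggers a transfer to NS2 automatically.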