Switch and NAS bonding
Setting up the bonding port on the Cisco was straightforward... once I quit using the Cisco GUI to try to configure it.
Switch port bonding
So the Synology NAS has two 1Gb network ports. It's possible to configure the NAS with a single port for basic use (how I've been using mine for years), or you can use the two ports to serve two different networks. Serving two networks this way is handy if you want to provide full 1Gb speeds to two networks without using a router (which may reduce overall speeds).
Though my switch is only a 1Gb switch, having the NAS bonded across 2x1Gb ports would be nice: it permits two systems to use the full 1Gb bandwidth at the same time. If I ever get a 2.5Gb or 10Gb switch, I would look at upgrading the NAS by adding a 10Gb NIC in the PCIe expansion slot (an E10G22-T1-Mini card from Synology). Regardless, my ultimate speed is still limited by each workstation's own NIC, so a 10Gb (or even a 2.5Gb) NIC would still need clients with equivalent speed.
I have an older Cisco SG300-10 switch. It's a 10-port 1Gb managed[1] switch, released in August 2010, that reached end of support in October 2023. Sadly, that means there may be some features on it that are never going to be fixed. But it's what I have, so until I get approval from my lab CFO, I'll make do.
In a past life I managed Cisco devices. This was 20+ years ago, so it was all command line (CLI) at that time. A few systems had GUIs, but those were only useful for monitoring and general health checks. But that WAS 20 years ago, so I thought, "Hey, Cisco must have matured the GUI and made it feature-compatible with the CLI. Right?"
Well...
The plan
I wanted to use the two LAN ports on my NAS to team up with two ports on the Cisco switch, GigEthernet7 and GigEthernet8. Fairly straightforward and uncomplicated... right?
A GUI, sticky mess...
Many on-line documents show that the way to set up port bonding is to use the GUI.
You log in to the switch using a web browser, go to "Port Management", expand "Link Aggregation", and click on the "LAG Management" menu. LAG stands for "Link AGgregation", and aggregation is another term for "bonding". You then select one of the unused LAG entries and press the "Edit..." button.
The switch pops up a window where you choose the switch ports to add to the LAG group. You also need to provide the name of the LAG group, and check the box to enable LACP, the "Link Aggregation Control Protocol". See the Wikipedia article on Link aggregation.
After configuring this, the NAS and the switch could not communicate with each other. I moved the NAS to a free port so I could access it again, and then set up bonding on the NAS.
On the NAS, those same on-line documents directed me to log in to the NAS, open the "Control Panel", go to the "Network" section, and choose "Network Interface". From there, it was just a "simple matter" of creating a bond from both NIC ports with "Link Aggregation Mode" set to "IEEE 802.3ad Dynamic Link Aggregation".
And here is where the "easy" configuration went astray. Changing the NIC made it incompatible with the plain switch port it was on, so I moved the NAS to the two bonded ports on the switch.
The NAS and the switch never synced up, and a connection was never made.
With the NAS offline, I had to use the "reset" button on the NAS to recover it. The full details are documented on this Synology Knowledge Center document: "How do I reset my Synology NAS? (For DSM 6.2.4 or above)".
After many hours of messing around with this, I gave up and turned my attention to the Cisco SG300 switch.
CLI for a cleaner view
After logging in to my switch, reviewing some on-line documents from Cisco and other sites, and reacquainting myself with the Cisco command line environment, I got to work.
From the on-line documents, I essentially needed to set up a new Port-channel interface, then assign the two GigEth ports to it, and things should just work. The Cisco switch GUI had done most of the work, but it didn't configure things completely and added a lot of additional "junk" to the configuration of each port.
After a lot of cleanup and a bit of playing, I settled on this configuration for the Port-channel:
interface Port-channel1
speed 1000
flowcontrol on
description Synology-NAS
switchport mode access
switchport access vlan 10
This creates the new Port-channel, channel #1, sets the port speed to 1Gb, enables flow control, sets the switchport to access mode, and (very importantly) puts the port on VLAN 10 (my lab network, where the NAS IP address resides). The description line just provides a human-readable name for the interface.
I then configured the two Gigabit Ethernet ports to use the Port-channel:
interface gigabitethernet7
description "NAS - DS923+ LAN1"
channel-group 1 mode auto
interface gigabitethernet8
description "NAS - DS923+ LAN2"
channel-group 1 mode auto
Aside from the description lines, this simply tells each GigEthernet port that it is part of channel group 1; the Port-channel1 interface sets their overall configuration.
The results
Once I had the NAS link aggregation set up and the switch Port-channel configured, the bonded 1Gb links came up and started talking.
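The bond state can also be confirmed from the switch CLI. I'm going from memory here, so treat these as a sketch; the SG300's command syntax differs slightly from mainline IOS, and `show ?` will list what your firmware actually supports:

```
show interfaces Port-Channel 1
show lacp Port-Channel 1
show interfaces status
```

A healthy channel should show both Gigabit members joined and the link up.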
A nice side-effect of the LACP bonding is that the two Gigabit paths are redundant if one fails (cable disconnected, excessive noise, etc.). In a more advanced network, with switch hardware supporting it, it would be possible to use two different switches with a Port-channel set up with one Gigabit link on each, providing redundancy should a switch itself suffer a failure.
My configuration
On the NAS side, there's not much to show.
On the Cisco switch side, the total configuration is within these three sections:
interface gigabitethernet7
description "NAS - DS923+ LAN1"
channel-group 1 mode auto
interface gigabitethernet8
description "NAS - DS923+ LAN2"
channel-group 1 mode auto
interface Port-channel1
speed 1000
flowcontrol on
description Synology-NAS
switchport mode access
switchport access vlan 10
An SSH diversion
Given the age of this switch, it uses older SSH algorithms that modern Linux systems no longer enable by default. For the initial work I had to use the telnet command line. I want to eventually use Ansible to maintain the switch configuration, and Ansible uses ssh for its connections, so I had to figure this out.
After some research and testing, I had to adjust two files: the ~/.ssh/config file and the ~/.ssh/openssl.cnf file.
The ~/.ssh/config file:
Host 192.168.129.2
HostKeyAlgorithms +ssh-rsa
KexAlgorithms +diffie-hellman-group1-sha1
PubkeyAcceptedAlgorithms +ssh-rsa
PubkeyAcceptedKeyTypes +ssh-rsa
MACs +hmac-sha1
The IP address on the first line is the IP address of my switch. This Host block tells SSH to re-enable these older algorithms when communicating with that one system.
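For a quick test, the same algorithm overrides can also be passed as one-off -o flags without editing ~/.ssh/config. The ssh -G option is useful here: it prints the effective client configuration for a host without connecting, so you can check that the flags (or a Host block) took effect:

```shell
# Resolve (but do not connect) the client config for the switch, with the
# legacy algorithms re-enabled via one-off -o flags:
ssh -G \
    -o HostKeyAlgorithms=+ssh-rsa \
    -o KexAlgorithms=+diffie-hellman-group1-sha1 \
    -o MACs=+hmac-sha1 \
    192.168.129.2 | grep -iE '^(hostkeyalgorithms|kexalgorithms|macs)'
# Drop the -G and the grep to actually connect.
```

The IP address is my switch's; substitute your own.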
The ~/.ssh/openssl.cnf file:
[openssl_init]
alg_section = evp_properties
[evp_properties]
rh-allow-sha1-signatures = yes
This second file is a less common configuration to set: it adjusts OpenSSL itself (here via the Red Hat-specific rh-allow-sha1-signatures property) to allow SHA-1 signatures, something SSH doesn't expose through the ~/.ssh/config file.
To use this file, pass its full path to SSH via the OPENSSL_CONF environment variable, like this:
OPENSSL_CONF=/path/to/this/openssl.cnf ssh 192.168.129.2
With all these in place, you'll be greeted with the Cisco login prompt:
$ OPENSSL_CONF=~/.ssh/openssl.cnf ssh 192.168.129.2
User Name:admin
Password:******
switch744144#show version
SW version 1.4.11.5 ( date 08-Apr-2020 time 13:49:34 )
Boot version 1.3.5.06 ( date 21-Jul-2013 time 15:12:10 )
HW version V04
switch744144#
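Since I'll be typing that OPENSSL_CONF prefix often, a small shell wrapper saves the effort. The function name and file location here are my own choices, nothing standard:

```shell
# Hypothetical convenience wrapper, e.g. dropped into ~/.bashrc:
cisco_ssh() {
    # Point OpenSSL at the SHA-1-permitting config for this invocation only
    OPENSSL_CONF="$HOME/.ssh/openssl.cnf" ssh "$@"
}
```

With that in place, `cisco_ssh 192.168.129.2` behaves like the full command above.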
Now to work on the VLAN settings.
[1] A managed switch is one that has an interface through which specific features of the switch, or of individual ports, can be configured. ↩