
Testing the speed of GigaBlox, our Tiny Gigabit Switch

Introduction:

We recently shared a post on our Test Rig: a custom mount that we use to perform network tests on BotBlox products to showcase their reliability. The rig has Raspberry Pis mounted on it, a PoE-compatible Ethernet camera and a 7-inch screen for displaying the test results. We mount our switches in the centre and connect the RPis to form a network that we can test. In that post, we showed that the SwitchBlox performed close to the maximum possible efficiency – something we are very proud of, given the tiny form factor. We hope to replicate those high standards with the new test results in this post for GigaBlox, our new Tiny Gigabit Ethernet Switch. We will also go over several key tweaks to our methodology and their potential implications for the test.

While we won’t go over the precise details of the test here (see the first post on the Test Rig if you want a more in-depth explanation), there are some changes to the methodology that we thought were worth covering. Specifically, we looked at increasing the MSS of the Ethernet packets, increasing the TCP window size, and running multiple concurrent connections to the iPerf server’s destination port.

TCP window size:

When packets are sent between a source and a destination using the TCP protocol, the destination sends an ACK after it receives packets from the source, to confirm that they arrived in the right order or to request retransmission should they arrive incorrectly. This was designed for reliability at a time when network connections were often unreliable. A downside of TCP, however, is the time taken for the ACK to travel back: the source does not transfer ‘new’ data until the ACK is received. Put another way, the source sends a fixed amount of data and then waits for a single ACK covering that data, which (in theory) is a limiting factor on the rate of data transfer. The amount of data that can be in flight before the destination must acknowledge it is called the TCP window size; the default in our iPerf tests is 64 KB. Increasing this could allow us to increase the speed of data transfer, and thankfully iPerf has options to set a larger window; we have chosen 200 KB as an upper limit. It should be noted that this promise of higher speed comes with a caveat on an unreliable network: with a larger TCP window, even a small number of missing bytes triggers a retransmission of more data, so corrupted transfers take up a larger proportion of the network time than they would with a smaller window.
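To see why the window size can matter, a useful rule of thumb is the bandwidth-delay product: throughput is roughly capped at the window size divided by the round-trip time. The short sketch below illustrates this; the RTT values are assumptions chosen for illustration, not measurements from our rig.

    # Bandwidth-delay product: TCP throughput is roughly capped at window / RTT.
    def cap_mbit(window_bytes: int, rtt_s: float) -> float:
        return window_bytes * 8 / rtt_s / 1e6

    # RTT values are illustrative assumptions, not measurements from our test rig.
    for rtt_ms in (1.0, 0.2):
        for window_kb in (64, 200):
            cap = cap_mbit(window_kb * 1024, rtt_ms / 1000)
            print(f"RTT {rtt_ms} ms, window {window_kb} KB -> cap of ~{cap:.0f} Mbit/s")

At a 1 ms round trip, the default 64 KB window would cap throughput at around 524 Mbit/s, below gigabit line rate; at 0.2 ms, neither window size is the bottleneck.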

 


MTU/MSS payload:

First, let’s examine the differences between the terms MTU, MSS and Ethernet frame. What counts as part of an Ethernet frame varies slightly depending on which OSI layer you take as context. We will use the most familiar one: the frame consists of the headers contributed by the OSI layers from Data Link up to Transport, followed by the payload, which is the ‘actual’ data the source wants to send; the Physical-layer overhead is not included. For example, let’s break down 1538 bytes transmitted on the wire to see how this works. The first 20 bytes correspond to data added at the Physical layer: a preamble used to indicate the start and timing of the transmission, plus the interpacket gap. The next 18 bytes are added at the Data Link layer and include the MAC addresses of the source and destination and a checksum; this grows to 22 bytes if a VLAN tag is included in the header as well. The next 20 bytes are the IP header, which includes the IP addresses of the source and destination. Finally, another 20 bytes are the TCP header, which includes the source and destination port numbers.

What remains is the payload, which contains the actual data transmitted. The maximum size of this payload is the MSS (Maximum Segment Size), and in this case it is 1460 bytes: the 1518-byte frame (1538 minus the 20 Physical-layer bytes, which are not counted) minus the 58 bytes of Ethernet, IP and TCP headers. The MTU (Maximum Transmission Unit) covers the payload plus the IP and TCP header overheads; in our example the MTU is 1500 bytes. Since the header size stays constant, it makes sense to increase the payload relative to that overhead so that more useful data is transmitted over the network. In our tests, we aim to change the MSS of the frames used in the iPerf test. Specifically, we aim to increase the MSS to 8960 bytes, at which point the Ethernet frame is dubbed a ‘jumbo’ frame.
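As a quick sanity check on these numbers, the sketch below works out the payload-to-wire efficiency for the standard MSS and the jumbo MSS, using the byte counts from the breakdown above; it assumes the same 18-byte Ethernet and 20-byte Physical-layer overheads apply to jumbo frames.

    # Wire efficiency = payload / (payload + all per-frame overheads)
    TCP_IP_HEADERS = 40     # 20-byte IP header + 20-byte TCP header
    ETHERNET_OVERHEAD = 18  # MAC addresses, EtherType and checksum (no VLAN tag)
    PHYSICAL_OVERHEAD = 20  # preamble and interpacket gap

    def efficiency(mss: int) -> float:
        return mss / (mss + TCP_IP_HEADERS + ETHERNET_OVERHEAD + PHYSICAL_OVERHEAD)

    for mss in (1460, 8960):
        print(f"MSS {mss}: {efficiency(mss) * 100:.1f}% of line rate, "
              f"~{efficiency(mss) * 1000:.0f} Mbit/s on a gigabit link")
    # MSS 1460 -> ~94.9% (~949 Mbit/s); MSS 8960 -> ~99.1% (~991 Mbit/s)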

Parallel network connections:

The final addition to the test methodology is running multiple concurrent network connections to the same destination address and port. Previously, we ran only one connection: a single source address/port pair established a connection with a single destination address/port pair under the TCP protocol. This one-to-one relationship is captured in what is called a 5-tuple, which contains five values: source address, source port, destination address, destination port and protocol. A unique 5-tuple defines a unique connection. However, it may be that a single connection does not fully exploit the ingress capabilities of the destination port, so this test increases the number of concurrent connections to the server to 5.
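As a small illustration of how parallel streams become distinct connections, the sketch below models the 5-tuple; the addresses and source ports are invented for illustration, and 5201 is simply iPerf3’s default server port.

    from collections import namedtuple

    # A TCP connection is identified by this 5-tuple.
    FiveTuple = namedtuple("FiveTuple", "src_addr src_port dst_addr dst_port protocol")

    # Five parallel iPerf streams to the same server: only the ephemeral source port
    # differs, but that alone makes each stream a unique connection.
    # Addresses and ports are made up for illustration; 5201 is iPerf3's default port.
    streams = [
        FiveTuple("192.168.1.10", 50000 + i, "192.168.1.20", 5201, "TCP")
        for i in range(5)
    ]

    assert len(set(streams)) == 5  # five distinct connections
    for stream in streams:
        print(stream)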

 

Methodology:

 

Before showing the results, the increased complexity of the test methodology presents a problem: we don’t want to test too many combinations at the risk of not adding value. To simplify the test, we have removed the UDP protocol tests entirely. We believe they don’t add further information about the switch’s performance, and that TCP speed and retransmitted bytes are sufficient for all use cases. Furthermore, we were only using two RPis for each test in the previous methodology, but we soon realised that this is not a realistic scenario for our customers, who may connect several devices to the GigaBlox. Hence, this test runs two simultaneous iPerf tests between 4 RPis connected to the switch, to simulate the congestion that customers are likely to subject the GigaBlox to. Lastly, each iPerf test runs for a fixed 60 seconds across all experiments.
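For reference, here is a sketch of how one of these runs might be launched from a script, assuming the iPerf3 client. The flags used (-c, -t, -w, -M, -P) are standard iPerf3 options; the hostname and the wrapper function itself are illustrative rather than the exact scripts running on our rig.

    import subprocess

    def run_iperf_client(server: str, window: str, mss: int, parallel: int) -> str:
        """Run a 60-second iPerf3 test and return its text output.
        The server name and this helper are illustrative, not our rig's exact scripts."""
        cmd = [
            "iperf3",
            "-c", server,         # connect to the iPerf3 server on another RPi
            "-t", "60",           # fixed 60-second duration for every experiment
            "-w", window,         # TCP window size, e.g. "64K" or "200K"
            "-M", str(mss),       # TCP maximum segment size, e.g. 1460 or 8960
            "-P", str(parallel),  # number of parallel streams, 1 or 5
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # Example: the settings of test 5 in the table below (64 KB window, jumbo MSS, 5 streams)
    # print(run_iperf_client("rpi-b.local", "64K", 8960, 5))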

 

Test | TCP window size (KB) | MSS size (bytes) | Parallel concurrent connections | Speed of both iPerf tests (Mbit/s) | Retransmitted frames in both iPerf tests
1 | 64 | 1460 | 1 | 911, 941 | 135, 0
2 | 64 | 1460 | 5 | 919, 940 | 0, 0
3 | 64 | 8960 | 1 | 906, 921 | 83, 0
4 | 200 | 1460 | 1 | 909, 909 | 1, 7
5 | 64 | 8960 | 5 | 941, 940 | 0, 0
6 | 200 | 8960 | 1 | 940, 909 | 0, 9
7 | 200 | 8960 | 5 | 936, 936 | 0, 31

 

 

Looking at the results table, the first columns define the parameters of each test using the characteristics outlined above. The first is the TCP window size, with values of either 64 KB (default) or 200 KB. Next is the MSS: either 1460 bytes (default) or 8960 bytes. The last parameter column is the number of concurrent connections made to the iPerf server: either 1 (default) or 5. The remaining columns report the speed and the retransmitted frames measured in the two simultaneous iPerf tests.

 

A glance reveals that the maximum speed achieved is 941 Mbit/s, which is very close to the theoretical maximum when using the standard MSS (roughly 949 Mbit/s, per the overhead calculation earlier). With the jumbo frames we are using, we would expect a theoretical maximum efficiency close to 99%. This is an intriguing result that requires further investigation – we suspect that the MTU restriction on the RPi itself is constraining the Ethernet traffic. The iPerf test may be configured to use jumbo frames, but the Ethernet adapter on the RPi might be fragmenting the data to keep the segment size within its own limits. If that is the case, then the GigaBlox is performing very close to the maximum possible efficiency for the standard MSS – a tremendous achievement for BotBlox. We shall follow up on this discrepancy if we find an explanation.
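One quick way to investigate that suspicion is to check what MTU the RPi’s Ethernet interface is actually configured with. A minimal sketch is below; it assumes the interface is called eth0 (the usual name on a Raspberry Pi) and simply shells out to the standard ip tool.

    import subprocess

    # Print the one-line interface summary, which includes "mtu <value>".
    # If it still reads "mtu 1500", the stack cannot send an 8960-byte MSS in a
    # single frame and will cap or fragment it, which would explain the result above.
    summary = subprocess.run(
        ["ip", "-o", "link", "show", "eth0"],  # "eth0" assumed; adjust if different
        capture_output=True, text=True, check=True,
    ).stdout
    print(summary)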

 

A capture of the test results where GigaBlox achieved a speed of 941 Mbit/s.

 

Also, it seems that increasing the TCP window size did not noticeably improve performance. This may be because there were more retransmitted bytes than in the SwitchBlox tests (possibly due to the greater line speed), which would indicate a slightly less reliable connection than 100 Mbit/s Ethernet and would offset the benefit of a larger window. However, performance does seem to increase slightly when more concurrent connections are used, most likely because we are making fuller use of the ingress capabilities of the Ethernet chip onboard the GigaBlox at all times.

 

It is worth noting that the GigaBlox showed more lost packets than the SwitchBlox did. There are two potential reasons for this: (1) the GigaBlox has a faster line speed, so more packets are being transferred and the comparison is not entirely fair, and (2) the test we are running is far more stringent, given that we are running two iPerf tests at the same time rather than one. Either way, we are immensely impressed with the performance of the GigaBlox. For example, across all of the tests, we observed a maximum data transfer of 6.57 GB per port pair in a single test (when speeds reached 941 Mbit/s). That corresponds to transferring almost the entire game of Portal 2 in 60 seconds! The GigaBlox has now proven its credentials for photography and video applications that demand this kind of throughput.
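As a quick sanity check on that figure (simple arithmetic using only the numbers above):

    # 941 Mbit/s sustained for the 60-second test duration:
    total_bits = 941e6 * 60
    gibibytes = total_bits / 8 / 2**30
    print(f"{gibibytes:.2f} GiB")  # ~6.57, matching the per-port-pair figure above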

About the Author:

Aaron Elijah is a software developer with interests in secure networking and cryptography. He manages BotBlox’s software and firmware development efforts.
