Limitations of Protocols

Interestingly, the same Ethernet technology that delivers the bandwidth modern applications need to serve Web sites, audio, video, and massive data transfers works against some applications. In fact, if you try to run one of those old terminal (VT100, VT220) applications on most modern networks, you will likely run into trouble. The old serial links, with little bandwidth but low delay and orderly delivery, made terminal/mainframe connections work just fine. Put that same client-server, keystroke-by-keystroke model on a moderately utilized cable modem or DSL Internet connection and it will likely drive the user nuts. The difference is the quantity and frequency of request and reply between the two stations. In the terminal/mainframe example, this exchange happens keystroke by keystroke. A serial-link style protocol that expected to be the only traffic on the link just doesn't play well in a high-volume, multipurpose, variable-latency network. So network protocols had to evolve to allow competing users and applications to share the same link(s). A quick calculation of the per-keystroke cost follows.
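To put a rough number on that keystroke-by-keystroke cost, here is a small back-of-the-envelope sketch. The 80-character command line and the round-trip times are illustrative assumptions, not measurements; the point is simply that every remotely echoed keystroke costs at least one round trip, so the waiting adds up fast on a high-latency path.

```python
# Rough illustration: cumulative echo delay for a remotely echoed command line.
# The command length and RTT values are assumptions for illustration only.

command_length = 80  # characters typed, each echoed by the remote host

for label, rtt_ms in [("local serial link", 1), ("busy DSL/cable path", 100)]:
    total_wait_s = command_length * rtt_ms / 1000.0
    print(f"{label}: {rtt_ms} ms RTT -> ~{total_wait_s:.1f} s spent waiting for echoes")
```

On the assumed numbers, the same 80 characters that echo back almost instantly over a 1 ms serial link cost roughly eight seconds of cumulative waiting over a 100 ms path, which is exactly the behavior that drives users away from per-keystroke applications on such networks.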

In the race to adopt standards and off-the-shelf technology came the concept of protocols that provide basic interoperability across the network. TCP/IP is the most obvious standard: IP provides the ability for any two stations to send data to each other, and TCP does the work of sorting out variable latency and packet loss. It effectively chunks up the data and creates buffers at both ends so the protocol can even out the ebb and flow of network traffic that has to compete with many other flows of data. If the buffers aren't enough to cover the range of response times, whether across a single link or half the planet, there is a mechanism to request that data be resent. Here is where most common application performance problems start. Since there is no explicit notification when data is lost or delayed, a receiving station must make a guess. It does this with a timer and a buffer. If three out of four packets of data arrive in the buffer, it has to assume the fourth packet was lost and request it again. Likewise, if it is waiting for traffic and none arrives, it will wait a specified time and then either request the data again or abandon the connection. So, effectively, protocols expect traffic to arrive and have basic tools to handle delay and loss, but if the variability exceeds their tolerance, everything comes to a halt in a hurry. A simplified sketch of that timer-and-gap logic follows.
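As a rough sketch of the timer-plus-gap-detection idea described above (this is illustrative logic in Python, not how any real TCP stack is implemented; the segment count and timeout value are assumptions), a receiver can track which pieces have arrived, guess that anything missing after a quiet period was lost, and ask for it again:

```python
import time

# Illustrative sketch of gap detection plus a receive timeout.
# Not a real TCP implementation; segment numbering and timeouts are simplified.

EXPECTED_SEGMENTS = 4      # e.g. four packets making up one buffer's worth of data
RECEIVE_TIMEOUT_S = 2.0    # if nothing arrives in this window, assume loss

def receive_buffer(get_segment):
    """get_segment() returns a segment number that just arrived, or None."""
    received = set()
    last_arrival = time.monotonic()

    while len(received) < EXPECTED_SEGMENTS:
        seg = get_segment()
        if seg is not None:
            received.add(seg)
            last_arrival = time.monotonic()
        elif time.monotonic() - last_arrival > RECEIVE_TIMEOUT_S:
            # No notification of loss exists, so after the timer expires the
            # receiver guesses the missing segments were lost and re-requests them.
            missing = sorted(set(range(EXPECTED_SEGMENTS)) - received)
            print(f"timeout: re-requesting missing segments {missing}")
            last_arrival = time.monotonic()  # restart the timer after the re-request
    return received
```

The key point the sketch shows is that the receiver only ever infers loss from a gap plus a timer; when the network's delay variability exceeds that timer, the re-requests (or an abandoned connection) are what the user experiences as a stall.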

Each protocol, whether TCP or a higher-level protocol such as SMB, FTP, or HTTP, has its own mechanisms to request, retrieve, and reconstruct data flows. Those protocols have inherent assumptions and operating processes designed to improve reliability. Those same processes also put a limit on how fast a given amount of data can move from A to B. For TCP transactions, the theoretical limit is:

Maximum Possible Transfer Rate = TCP Window Size / Round-Trip Time (RTT)

This limit applies regardless of available bandwidth and assumes the network is otherwise performing perfectly. It is also per transaction, so in the case of multiple users (or multiple TCP transactions by the same user) you could expect the maximum utilization to be this maximum transfer rate multiplied by the number of transactions or users. Again, this assumes a network performing perfectly; as the latency (RTT) rises, the throughput drops, and if packet loss occurs, all bets are off. This is also independent of the stations' hardware (NIC, CPU, and disk), which, like the network, will introduce its own limitations on end-to-end performance. The short calculation below shows how quickly latency alone caps per-connection throughput.
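To make the window/RTT limit concrete, here is a small calculation. The 64 KB window (the classic maximum without TCP window scaling) and the RTT values are illustrative assumptions; the formula itself is the one given above.

```python
# Theoretical per-connection TCP throughput ceiling: window size / round-trip time.
# Window size and RTT values below are illustrative assumptions.

window_bytes = 64 * 1024  # 64 KB window, the classic limit without window scaling

for rtt_ms in (1, 20, 50, 200):
    rate_bps = window_bytes * 8 / (rtt_ms / 1000.0)
    print(f"RTT {rtt_ms:>3} ms -> max ~{rate_bps / 1e6:6.1f} Mbit/s per connection")
```

With these assumed numbers, a single connection tops out around 524 Mbit/s at 1 ms RTT but only about 10 Mbit/s at 50 ms and under 3 Mbit/s at 200 ms, no matter how much raw bandwidth the link offers, which is why latency, not link speed, so often sets the ceiling on a single transfer.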
