At this point in the story we have created a widely adopted, relatively cheap network technology called Ethernet, and many providers have built services and infrastructure that use it as the common interconnection technology. The technology has some underlying limitations, both inside any given building and once you start communicating between buildings. The contention-based design gives rise to undesirable properties even under relatively light load, and this is a limitation that cannot be ‘fixed’. Increasing the available bandwidth will put off the problem, but typically involves buying 2, 5 or 10x more bandwidth than you are ever likely to use. For many, this simple but inelegant solution is good enough.

It also perpetuates the myth, rooted in the old serial-link days, that bandwidth is ultimately the problem, so ‘turning up the volume’ becomes turning up the bandwidth. For simple connections that start out as DSL or cable modem service at 1, 2 or 5 Mbps, a move to a 10 Mbps service and then a 100 Mbps service provides a massive increase in capacity at relatively manageable cost. At the current time, the jump from 100 Mbps to 1000 Mbps (1 Gbps) is a much costlier matter. Ultimately the network will surrender to its basic capabilities, and further increases in bandwidth will be impossible or financially prohibitive. At that point the user’s experience depends on the ability of the application and its associated protocols to deal with increases in latency and possible packet loss: the network punts the performance problem to the software at each end of the connection.
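A small back-of-the-envelope sketch makes the diminishing returns concrete. Using the simple model that a one-shot transfer costs one round trip plus the time to serialize the bits onto the wire, the numbers below (illustrative figures, not measurements from the text) show that once bandwidth is plentiful, latency dominates and further bandwidth buys almost nothing:

```python
# Illustrative sketch: transfer time = round-trip latency + size / bandwidth.
# The object size and RTT are assumed example values, not from the source.

def transfer_time_ms(size_bytes, bandwidth_mbps, rtt_ms):
    """One round trip plus serialization delay, in milliseconds."""
    serialization_ms = (size_bytes * 8) / (bandwidth_mbps * 1_000_000) * 1000
    return rtt_ms + serialization_ms

SIZE = 100_000   # a 100 KB object (assumed)
RTT = 30.0       # 30 ms round trip, a plausible wide-area figure (assumed)

for mbps in (5, 10, 100, 1000):
    print(f"{mbps:>5} Mbps -> {transfer_time_ms(SIZE, mbps, RTT):7.1f} ms")
# Going from 5 to 100 Mbps cuts the time from 190 ms to 38 ms,
# but the further 10x jump to 1 Gbps only reaches 30.8 ms:
# the 30 ms round trip is now the floor, and no amount of
# bandwidth can lower it.
```

This is exactly the point at which responsibility shifts from the pipe to the endpoints: the protocols and applications must cope with the latency that remains.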