Prior to Ethernet, services such as T1s had uniform service properties and a finite capacity. The limitation of the network was how many bits it could carry (bandwidth), not how well it could carry them; the network would simply cease to operate when the maximum bandwidth was reached. Unfortunately, this model of network performance still pervades the ‘common knowledge’. On most Ethernet-based network services the service properties are not uniform. An Ethernet-based service with more than two end points will perform consistently up to about 30% of the interface speed (say 30Mbps on a 100Mbps service); between 30 and 60%, the properties will start to degrade fairly gracefully. As contention for communication rises, the chance that one end must wait for another to finish increases exponentially (like more billiard balls rolling around on a pool table). In the old days this was called a collision; as network devices got smarter, collisions went away and buffers and queues became the delay point. Either way, a small but measurable increase occurs in the average time (latency) it takes for a bit of information to travel from one end to the other. Measured in thousandths of a second (ms), it starts to affect the end-to-end performance of the network and the services being used over it.
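The way waiting time climbs with contention can be sketched with the classic M/M/1 queueing formula. This is an idealized single-queue model, not a model of Ethernet itself, and the utilization figures are illustrative only, but it shows the same qualitative behaviour described above: delay grows slowly at first, then shoots up as utilization approaches 100%.

```python
# Illustrative sketch only: M/M/1 is an idealized single-server
# queue, not Ethernet, but it exhibits the same nonlinear growth
# of delay with utilization.

def delay_multiplier(utilization: float) -> float:
    """Mean time in an M/M/1 system relative to an unloaded one.

    With utilization rho = arrival rate / service rate, the mean
    time in system is W0 / (1 - rho), so the multiplier over the
    unloaded service time W0 is 1 / (1 - rho).
    """
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

if __name__ == "__main__":
    for rho in (0.30, 0.60, 0.80, 0.95):
        print(f"{rho:.0%} utilization -> "
              f"{delay_multiplier(rho):.1f}x base latency")
```

At 30% load the multiplier is only about 1.4x, at 60% it is 2.5x, and at 80% it is 5x and climbing steeply, which lines up with the graceful-then-rapid degradation pattern described above.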
Even more significant for some services, the latency varies from one bit of data to the next, leading to a range (min/avg/max) in the time it takes to transmit data. All three values rise, and the range (min to max) widens, as the network loads up. Somewhere over 60 to 70% utilization, depending on the capabilities of the workstations and network devices, collisions or overflowing buffers and queues will start to cause packet loss. The overall performance of the service then degrades rapidly and exponentially; at about 80% and above it effectively becomes unusable. It is important to note that only in the most limited cases will an Ethernet connection ever operate at the speed of the interface (i.e. 100Mbps), with the theoretical maximum being 80-90Mbps. This is a fundamental limitation of the protocol and the contention model it is based on. While a congested (high-utilization) network is easy to measure and deal with, the effects of the other properties can be much more difficult to determine. In the end, the network link needs to be measured by available bandwidth, average latency, and packet loss.
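That three-number view of a link can be sketched as a small summary routine. The latency samples below are synthetic stand-ins; in practice the per-packet round-trip times would come from a tool such as ping, with a lost packet recorded here as None.

```python
# Summarize a link by the properties named above: latency range
# (min/avg/max) and packet loss. Samples are synthetic per-packet
# round-trip times in ms; None marks a packet that never returned.

def summarize(samples):
    received = [s for s in samples if s is not None]
    loss_pct = 100.0 * (len(samples) - len(received)) / len(samples)
    return {
        "min_ms": min(received),
        "avg_ms": sum(received) / len(received),
        "max_ms": max(received),
        "loss_pct": loss_pct,
    }

if __name__ == "__main__":
    # Made-up samples for a loaded link: jitter widens the
    # min-to-max range and one packet is lost.
    samples = [12.0, 12.4, 13.1, None, 18.7, 12.2, 25.3, 12.9, 14.0, 13.5]
    stats = summarize(samples)
    print(f"min/avg/max = {stats['min_ms']:.1f}/{stats['avg_ms']:.1f}/"
          f"{stats['max_ms']:.1f} ms, loss = {stats['loss_pct']:.0f}%")
```

On a lightly loaded link the min and max stay close together; as the network loads up, the max (and hence the range) grows much faster than the min, which is exactly the behaviour described above.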