The Next Five Years of Bandwidth

[NOTE: This essay was commissioned by a client in December 2006. It’s the second in a series of old-yet-relevant position papers whose exclusivity has expired that I’m editing and posting. Things for the next five years look “similar”, yet scaled up in some areas. There is no formal “conclusion”, as this is one section of a larger piece.]

Over the next five years, datacenter bandwidth will level off for a bit. With the 10GigE standard behind us we can finally pull our backbones up to a level where they’ll be able to breathe easier for a while. Storage speeds are still gated by the storage devices themselves, and until either solid-state media becomes cost-effective or disks rotate twice as fast as they do now, that isn’t going to change much. Aggregating virtual systems is actually causing an interesting bandwidth phenomenon that I’ll address later. Regardless, a 10Gig, or Nx1Gig, backbone should be able to breathe well for the next half-decade. Plan for year-over-year demand increases of 5-7%.
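
For planning purposes, here is a back-of-envelope sketch (in Python) of what 5-7% compounded growth does against a 10Gig backbone over five years. The 4Gb/s starting utilization is an assumed figure for illustration, not one taken from this essay:

    # Compound 5-7% year-over-year growth against a 10 Gb/s backbone.
    # The starting utilization is an assumption for illustration only.
    BACKBONE_GBPS = 10.0
    START_UTILIZATION_GBPS = 4.0   # hypothetical current peak utilization
    YEARS = 5

    for growth in (0.05, 0.07):
        demand = START_UTILIZATION_GBPS * (1 + growth) ** YEARS
        headroom = BACKBONE_GBPS - demand
        print(f"{growth:.0%} growth -> {demand:.2f} Gb/s after {YEARS} years "
              f"({headroom:.2f} Gb/s of headroom)")

Even at the high end of that range, the backbone keeps comfortable headroom, which is the basis for the half-decade claim above.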

Desktop network speeds have been about the same for the last five years, and will largely remain unchanged. A 32-bit computer system running a commercial desktop operating system still has too many architectural limitations to make use of more than 60-85Mb/s of bandwidth. While some vendors are shipping 64-bit processors, these are generally paired with bus architectures that aren’t that wide, gating peripheral speeds back to 32-bit levels. In the next five years that will clean up a bit, and 64-bit “extensions” to 32-bit processors will become more commonplace, but they still won’t impact the network noticeably, due largely to OS and bus architecture issues.
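
To put that ceiling in perspective, here is a quick sketch comparing the 60-85Mb/s host limit from above against common port speeds (the port speeds are generic Fast Ethernet and gigabit figures, not numbers from this essay):

    # How much of a desktop port an OS/bus-limited host can actually fill.
    # The 60-85 Mb/s ceiling is from the text; port speeds are generic examples.
    HOST_CEILING_MBPS = (60, 85)

    for port_mbps in (100, 1000):
        low, high = (c / port_mbps for c in HOST_CEILING_MBPS)
        print(f"{port_mbps} Mb/s port: host fills roughly {low:.0%}-{high:.0%} of the link")

In other words, upgrading desktop ports to gigabit buys little until the host architecture catches up.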

Environments consolidating onto virtualized systems are seeing an interesting gross decrease in datacenter network bandwidth use. Not surprisingly, they’re also seeing peak utilization well above what they had prior to consolidation. The latter is easily explained: virtualized systems generally “netboot” their OS from the storage network or a bootserver, and now more than ever embrace networked storage completely. The gross decrease was unexpected, given the higher demands on the network, but is explained by architectural constraints. We’re now seeing 10-15 virtual servers sharing one or two network connections where previously each had one or two of its own. This has something of a levelling effect on network use, but isn’t dramatically impacting service performance as one might expect. The network is more important in these environments, but as a whole not as taxed.
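
A rough illustration of that aggregation (the server and NIC counts are assumptions chosen within the 10-15 range mentioned above, not measured figures):

    # Access-layer capacity before and after consolidation, illustrating why
    # gross network use drops while contention per link rises.
    SERVERS = 12            # physical hosts folded into VMs (assumed)
    NICS_PER_SERVER = 2     # 1 Gb/s links each, pre-consolidation (assumed)
    SHARED_NICS = 2         # 1 Gb/s links on the consolidated host (assumed)

    before_gbps = SERVERS * NICS_PER_SERVER * 1.0
    after_gbps = SHARED_NICS * 1.0

    print(f"Before: {before_gbps:.0f} Gb/s across {SERVERS * NICS_PER_SERVER} links")
    print(f"After:  {after_gbps:.0f} Gb/s across {SHARED_NICS} shared links")
    print(f"Aggregation ratio: {before_gbps / after_gbps:.0f}:1")

The same workloads now contend for a fraction of the access capacity, which is where both the gross decrease and the higher peaks come from.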

It was largely believed that mobile “broadband” availability and use would be much higher by now, but we have yet to see a real platform that makes use of it. The Palm Treo series is getting an overhaul “soon”, and rumored platforms from Google and Apple may change that landscape. In general, even if fully realized, the network demands of these users will have little impact on the greater network, or on datacenter network needs. The next generation, “4G”, will change that, but I don’t expect to see that kind of horsepower in a phone until late 2010 to 2012: the processors are still just too slow.

What will change dramatically is bandwidth access for remote users. While not directly impacting the datacenter, we’re going to see dramatic growth in the cable/DSL/satellite “broadband” space, on the order of 200-250% more bandwidth per subscriber. Internet-facing applications may see a 20-30% rise in client demands as users, expecting “faster” service, become less tolerant of waiting for applications to load. Expect OSP asymmetrical provisioning to continue.
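
A minimal sketch of what that translates to on the application side, assuming an illustrative client population and per-client baseline (neither figure comes from this essay):

    # Server egress planning for a 20-30% rise in per-client demand.
    # Client count and baseline per-client demand are assumptions.
    CLIENTS = 10000                  # assumed concurrent client population
    BASELINE_PER_CLIENT_KBPS = 150   # assumed average demand per client today

    for rise in (0.20, 0.30):
        egress_mbps = CLIENTS * BASELINE_PER_CLIENT_KBPS * (1 + rise) / 1000
        print(f"{rise:.0%} demand rise -> plan for roughly {egress_mbps:,.0f} Mb/s of egress")

The point isn’t the absolute numbers but that client-side bandwidth growth shows up at the datacenter edge as a 20-30% demand increase, not a 200-250% one.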
