In the last post I argued that come 2014, there will not be enough spectrum or enough base stations to serve the wireless data demand, even with LTE deployed in typical macro-cell models. In the same post, I highlighted LTE femtocells and WiFi as the options for overcoming this capacity crunch, and I argued why I consider WiFi the essential offloading technology even in networks where femtocells are deployed.
Now let’s look at the distribution of that traffic growth. Using Cisco’s Visual Networking Index (VNI) mobile data figures, we can estimate that 2/3 of the total traffic will be video. Considering that many other industry pundits cite figures around 70%, and that traffic measurements from operator networks already show 50-60% video (even without significant penetration of netbooks and tablets), I believe the 2/3 figure is reasonable.
In this brave new world of 2014, wireless service providers must find ways to offload the video traffic. But before delving into what the best strategy would be, let’s look at how the same problem was handled in the wired Internet.
In a widely quoted white paper (http://www.drpeering.net/white-papers/Video-Internet-The-Next-Wave-Of-Massive-Disruption-To-The-U.S.-Peering-Ecosystem.html), written and published in multiple editions during 2007-2008, William Norton (then at Equinix, now with Dr. Peering) described four models for dealing with the rapidly growing popularity of YouTube and other video content sites as of 2007. He compared the delivery cost per video of each model at three different traffic loads (low, medium, high):
- Model 1 – Simple Commodity Transit: This was the original YouTube model before the company was fully integrated into Google. They relied on driving Internet transit costs down as far as they could.
- Model 2 – Content Delivery Network (CDN): This is the model Akamai made famous starting around 1999-2000. Since then Limelight has also built a significant CDN business. Even Google relied on Akamai in some cases to supplement its transit and server capacity. More recently, other cloud providers such as Amazon, EMC, MSN and certainly Google have entered the CDN business.
- Model 3 – Hybrid Transit/Peering/DIY CDN: This is the model Google eventually adopted, building massive data centers around the world, replicating YouTube content, and building Akamai-like features to direct users to the nearest data center.
- Model 4 – Peer-to-Peer Networking: This is the model BitTorrent has been trying to popularize for a long time. Akamai even made a P2P acquisition (Red Swoosh) back in 2007, but quietly removed all of its web presence and products; there have been some feeble P2P attempts from Akamai since, but nothing significant. Similarly, BitTorrent hasn’t managed to shake the bad aura created by the use of its software for illegal file copying, the high-profile music industry lawsuits, and so on.
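Norton’s comparison boils down to a simple cost-per-video calculation. The sketch below illustrates the shape of that comparison; every cost figure in it is a hypothetical assumption chosen for illustration, not a number from the white paper.

```python
# Illustrative cost-per-video comparison in the spirit of Norton's white paper.
# All per-GB rates below are hypothetical assumptions, NOT Norton's figures.

AVG_VIDEO_MB = 10  # assumed average video size in megabytes

# Assumed delivery cost in dollars per GB for each model at each load level.
# Transit and CDN rates fall slowly with volume; a DIY CDN has high fixed
# costs that amortize at scale; P2P rides on (assumed free) last-mile capacity.
COST_PER_GB = {
    "transit": {"low": 10.0, "medium": 6.0, "high": 4.0},
    "cdn":     {"low": 15.0, "medium": 8.0, "high": 5.0},
    "diy_cdn": {"low": 20.0, "medium": 5.0, "high": 2.0},
    "p2p":     {"low": 0.10, "medium": 0.05, "high": 0.01},
}

def cost_per_video(model: str, load: str) -> float:
    """Dollars per delivered video under the assumed per-GB rates."""
    return COST_PER_GB[model][load] * AVG_VIDEO_MB / 1024

for load in ("low", "medium", "high"):
    ranked = sorted(COST_PER_GB, key=lambda m: cost_per_video(m, load))
    print(load, "cheapest first:", ranked)
```

Under these assumed rates, P2P comes out cheapest at every load level, and at high load its cost per video is under 1% of the next-cheapest model, matching the flavor of Norton’s prediction.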
Looking back at the recent history of the wired world, models 2 and 3 ended up being adopted. Guess which model William Norton predicted would be the cheapest! Not the good old Internet transit model, nor the CDN. He predicted that the P2P model would be the cheapest, and by far: he estimated its delivery cost at less than 1% of the other models’.
Surely there must have been a mistake in Norton’s calculations! Actually, the mistake was the assumption that last-mile (access) networks would provide this transit capacity at zero cost. In reality, the lack of proper controls for content owners to regulate distribution, along with other technological factors such as the asymmetry of access-link bandwidth, the unreliability of P2P donor devices, and security risks, all contributed to the demise of P2P for content distribution.
Let’s fast-forward (back to the future) to tomorrow’s video traffic explosion in mobile networks. As I explained in the previous post, the expected bottleneck is the access network (the last mile, in content-distribution terms). Unlike the wired world, where DWDM kept providing additional bandwidth, wireless access capacity is restricted by the available and suitable spectrum (60 GHz doesn’t help). Therefore, throwing bandwidth at the problem with a macro-cell deployment model is simply not feasible at the traffic levels projected for 2014 and beyond.
Since WiFi is already heavily deployed as a home and office-premises networking technology, it can become that missing wireless access network providing the P2P content distribution capability. With the gradual replacement of legacy WiFi APs by 802.11n, WiFi system capacity will increase dramatically (roughly 100 times that of 802.11b). This will make WiFi the technology of choice for a P2P content distribution mechanism that offloads video from the macro LTE networks.
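The “roughly 100 times” figure can be sanity-checked with a back-of-envelope calculation. The throughput numbers below are my own assumptions (typical effective rates under favorable conditions), not measurements:

```python
# Back-of-envelope check of the ~100x capacity gain from 802.11b to 802.11n.
# All throughput figures are assumptions for illustration, not measurements.

B_PHY_MBPS = 11    # 802.11b peak PHY rate
B_EFF_MBPS = 5     # assumed real-world 802.11b throughput after MAC overhead

N_PHY_MBPS = 600   # 802.11n peak PHY rate (4 spatial streams, 40 MHz, short GI)
N_EFF_MBPS = 500   # assumed best-case throughput with A-MPDU frame aggregation

gain = N_EFF_MBPS / B_EFF_MBPS
print(f"Effective throughput gain: ~{gain:.0f}x")
```

With these assumptions the gain lands around two orders of magnitude; real deployments with fewer spatial streams or 20 MHz channels would see less, but still a dramatic jump over 802.11b.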
In this near-future brave new world, the earlier problems associated with P2P, such as illegal copying, security risks, and the unreliability of donor devices, need proper fixes. That means P2P-over-WiFi solutions must be end-to-end, spanning clients, devices, WiFi APs, storage, and network policy controls. In the next post, we will look at the components of such a solution in detail.