iCloud: Good for Pictures but Not So for Video

Apple’s announcement of iCloud was long anticipated, but like any Apple news it caused a lot of discussion within the industry. What Apple is planning to deploy is neither new nor novel; many companies have been providing similar services for a long time. One of the early pioneers was T-Mobile USA, which introduced a peer-to-peer picture sharing application almost a decade ago. At that time it was rather obvious that the synchronization capabilities of feature phones (this predates the first iPhone by five years) were nowhere near useful or ubiquitous enough, and the devices lacked a WiFi interface. The cost of the GPRS bearer (used to deliver those low-resolution pictures taken on sub-megapixel cameras) was too exorbitant to let a user send the same picture to multiple friends at different times. Instead, network storage (cloud wasn’t a popular term back then) was devised for picture sharing. T-Mobile USA insisted that its handset vendors support the so-called “Picture Blogger” service, with varying levels of success. Similar to what Apple is attempting, T-Mobile didn’t charge for the service; instead it relied on the stickiness and social-networking impact the service generated.

Fast-forward to 2011 and the world is a very different place. 13 kbit/s GPRS networks have been replaced by 42 Mbit/s HSPA+ capabilities. Instead of a sub-megapixel camera, the typical smartphone features a 5+ megapixel camera, and the same phone almost always has a WiFi radio. Furthermore, in the last 10 years users have added many devices to their portfolios: MP3/MPEG players for music and casual picture/video viewing, tablets primarily for accessing content through a web browser, traditional PCs/Macs for both accessing and generating/editing content, and cameras for generating content.

As always, Apple does what it does best: improving upon what others attempted and either failed at or achieved only limited success with. They take the concept of a store-and-forward architecture and make synchronization and back-up seamless. Total control of the user interface across all Apple devices gives them the ability to make something fairly simple and straightforward look like “magic”.

All of this doesn’t come free for Apple. They need to bear capital and operational costs for storage and computational power, as well as bandwidth to the Internet. Apple’s new data center in North Carolina was designed to store petabytes of digital media for iCloud users. Apple announced that it will give each user 5 GB of storage for free while charging for additional storage. As a comparison, Amazon’s S3 service charges $0.14 per GB per month for the first TB of usage. In other words, if Apple achieves the same level of efficiency as AWS S3, 5 GB of storage per user would cost less than 75 cents per month. Apple-provided content such as music and books does not count against this quota, since such content is served from a central repository as opposed to being stored individually per user.
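
As a quick check on that comparison, here is a minimal back-of-envelope sketch; it simply multiplies the quoted S3 rate by the free quota and ignores replication overhead and any volume discounts Apple would surely negotiate:

```python
# Back-of-envelope iCloud storage cost per user at S3-like pricing.
S3_PRICE_PER_GB_MONTH = 0.14   # USD, first-TB tier quoted above
FREE_QUOTA_GB = 5              # Apple's announced free allotment

cost_per_user_month = FREE_QUOTA_GB * S3_PRICE_PER_GB_MONTH
print(f"Storage cost per free-tier user: ${cost_per_user_month:.2f}/month")  # ~$0.70
```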

The real Achilles’ heel of the iCloud strategy seems to be the bandwidth Apple needs to provide within, and primarily in and out of, its data center. Let’s try to analyze why:

  • Unlike the storage quota, Apple didn’t put any restrictions on the amount of traffic a user can generate to and from iCloud storage. We believe this is primarily to make the service more appealing and easier to explain to the consumer, as opposed to imposing the meters, throttles and caps that wireless and even some wired service providers do.
  • Initially the only serious traffic generator will be pictures. Since there is no limit on user-generated content, apart from local device storage and possibly the bandwidth of the Internet connection, every picture taken may end up being uploaded to the cloud and downloaded to multiple devices. Today, picture sharing and storage sites such as Picasa, Flickr and even Facebook offer rather cumbersome methods: the level of integration is limited, and even with dedicated smartphone apps the “seamlessness” is not there. That is why even Facebook’s boasted rate of 1,000 picture uploads per second translates into only about 5 pictures per user per month. We expect iCloud to change this dramatically: the number of pictures taken and shared across devices will be on the order of hundreds per month for the average user, whereas power users may synchronize thousands per month.
  • Assuming a monthly upload of 1,000 pictures, an average picture size of 1.5 MB and an average of 2 devices to be synchronized, total monthly traffic per user may reach 3 GB.
  • We estimate Apple needs approximately 50 kb/s of bandwidth per user to effectively serve 3 GB/month of synchronization traffic. In other words, Apple must have at least a pair (for redundancy) of 50 Gb/s pipes for every 1 million iCloud users. Even at very competitive pricing (around $1 per Mb/s per month), this would mean spending $100K per month per 1 million users (see the sketch after this list).
  • Considering that iCloud is free and will be available for every iPhone, iPad and Mac, it is conceivable for Apple to reach the 100 million user mark within a couple of years. By our estimate, that would require Apple to spend roughly $120M per year just for Internet pipes of approximately 5 Tb/s.
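
The arithmetic behind these bullet points can be reproduced with a short sketch. The peak-to-average factor of roughly 5 used to turn 3 GB/month into ~50 kb/s of provisioned bandwidth is our assumption, not an Apple figure:

```python
# Rough iCloud picture-sync bandwidth and transit-cost estimate.
PICTURES_PER_MONTH = 1000
PICTURE_SIZE_MB = 1.5
SYNCED_DEVICES = 2
SECONDS_PER_MONTH = 30 * 24 * 3600
PEAK_TO_AVG = 5                    # assumed busy-hour headroom factor
TRANSIT_USD_PER_MBPS_MONTH = 1.0   # the competitive transit price quoted above

traffic_gb = PICTURES_PER_MONTH * PICTURE_SIZE_MB * SYNCED_DEVICES / 1000  # ~3 GB/month
avg_kbps = traffic_gb * 8e6 / SECONDS_PER_MONTH                            # ~9 kb/s average
provisioned_kbps = avg_kbps * PEAK_TO_AVG                                  # ~46 kb/s, i.e. ~50 kb/s

users = 1_000_000
pipe_gbps = provisioned_kbps * users / 1e6                                 # ~50 Gb/s per pipe
monthly_cost = pipe_gbps * 1000 * 2 * TRANSIT_USD_PER_MBPS_MONTH           # redundant pair of pipes
print(f"~{pipe_gbps:.0f} Gb/s per pipe, ~${monthly_cost / 1e3:.0f}K/month per 1M users")
```

Scaling the same numbers to 100 million users lands close to the ~5 Tb/s of pipes and roughly $10M per month (on the order of $120M per year) cited above.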

These cost and infrastructure-scale requirements are a likely reason why Apple didn’t extend the iCloud service to video. Considering that the iPhone records HD (720p) video, even 20-25 minutes of video per day may generate in excess of 500 MB. That would change the scale requirements dramatically, requiring pipes at least one order of magnitude bigger than those of an iCloud service limited to picture synchronization (a quick extension of the sketch above follows).
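
Extending the same back-of-envelope math to video; the ~3 Mb/s 720p encode rate and the two synchronized devices are our assumptions:

```python
# Rough 720p video scenario, extending the picture-sync estimate above.
MINUTES_PER_DAY = 22   # the 20-25 minutes/day mentioned above
VIDEO_MBPS = 3.0       # assumed iPhone 720p encode rate

mb_per_day = MINUTES_PER_DAY * 60 * VIDEO_MBPS / 8      # ~495 MB/day ("in excess of 500 MB")
gb_per_month_uploaded = mb_per_day * 30 / 1000          # ~15 GB/month per user
gb_per_month_synced = gb_per_month_uploaded * 2         # ~30 GB/month with 2 synced devices
print(f"~{gb_per_month_synced:.0f} GB/month vs ~3 GB/month for pictures")  # roughly 10x
```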

A better strategy for Apple could be to use its other assets, especially home devices such as the AirPort and Time Capsule, to optimize the path between the devices and the storage location. iCloud looks like an excellent solution for iTunes content and an acceptable solution for user-generated low-density media such as pictures, files, messages and contacts. However, for heavy-density media such as video, Apple needs a complementary solution; iCloud as it stands cannot deliver it at the quality and price level needed.

     


    Cost of Adding Network Capacity: More Spectrum or New Sites? Could There Be Other Alternatives?

    There is an ongoing debate in the wireless industry about where the capacity is going to come from. Historically, capacity has been added along three dimensions:

    1. Improvements in spectrum efficiency
    2. Increasing available spectrum
    3. Increasing the number of cell sites

    Martin Cooper is often credited as one of the pioneers of cellular communication. He led the team that built the first Motorola handheld cellular phone, demonstrated in 1973 and commercialized in 1984. He is also credited with postulating Cooper’s Law, which states that the total wireless capacity in a given geographical area doubles roughly every 2.5 years. Based on Cooper’s calculations, worldwide wireless capacity has increased by a factor of one trillion (10¹²) since 1901, when Marconi made his first trans-Atlantic transmission, and by approximately one million times in the last 45 years. He attributes the majority of the increase to greater spatial re-use of the available spectrum, i.e., reducing cell sizes and increasing the re-use factor. The following diagram shows how the overall increase breaks down across these factors.
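
    As a quick consistency check on those figures, here is a minimal sketch that back-computes the doubling period implied by each growth factor:

```python
# How often must capacity double to produce Cooper's quoted growth factors?
import math

def implied_doubling_period_years(growth_factor: float, years: float) -> float:
    """Years per doubling implied by a total growth factor over a period."""
    return years / math.log2(growth_factor)

print(round(implied_doubling_period_years(1e12, 110), 2))  # ~2.76 yrs: trillion-fold since 1901
print(round(implied_doubling_period_years(1e6, 45), 2))    # ~2.26 yrs: million-fold in 45 years
# Both are consistent with "doubles every 2.5 years or so".
```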

    Spectrum efficiency increases are relatively costly. As we argued in an earlier post, such increases are ultimately bounded by Shannon’s law. Multiple techniques, including more efficient modulation, coding, error correction, compression and multiplexing, all amount to extracting more capacity from a given Signal to Noise Ratio (SNR) and bringing the achieved efficiency closer to the Shannon limit. The following figure, from 3GPP TR 36.942, shows the relationship between various modulation and coding combinations and the Shannon limit.

    16 QAM and 64 QAM are modulation schemes used in HSPA+, LTE and WiMAX to increase channel capacity. Furthermore, all of these systems adapt the modulation and coding scheme based on channel characteristics. The result is the following graph from the same 3GPP document, which shows the achievable spectrum efficiency of a Single Input Single Output (SISO) system; it can be approximated as 3/4 of the Shannon limit.

    In other words, the highest modulation and coding scheme in the arsenal of current 3.5G/4G systems is limited to about 4.8 bit/s/Hz once the SNR exceeds roughly 19 dB. In order to achieve peak speeds exceeding 1 Gb/s for 4G, system designers must rely on Multiple Input Multiple Output (MIMO) techniques along with bonding multiple already-wide carriers into channels as wide as 100 MHz.
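
    The ~4.8 bit/s/Hz ceiling follows directly from the 3/4-of-Shannon approximation mentioned above; a minimal sketch:

```python
# SISO spectral efficiency approximated as a fixed fraction of the Shannon bound.
import math

def shannon_bps_per_hz(snr_db: float) -> float:
    """Shannon capacity per unit bandwidth: log2(1 + SNR)."""
    return math.log2(1 + 10 ** (snr_db / 10))

def siso_approx_bps_per_hz(snr_db: float, fraction: float = 0.75) -> float:
    """Practical SISO efficiency taken as ~3/4 of the Shannon limit."""
    return fraction * shannon_bps_per_hz(snr_db)

print(round(siso_approx_bps_per_hz(19.0), 2))  # ~4.75 bit/s/Hz, the ceiling quoted above
```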

    A recent report commissioned by Ofcom (the UK communications regulator) from Real Wireless paints a very detailed picture of the impact of technology (so-called spectrum efficiency) advances on overall capacity. It is a fascinating read: 120 pages of diagrams and charts, plus 95 pages of appendices detailing the survey and research methodology behind it. I would recommend that everyone interested in the future of wireless technology read it, or at least go over its major findings. The report was written for the UK market and its timelines reflect that market’s realities: LTE spectrum auctions haven’t been conducted yet, LTE deployments will likely start in 2012, and only in 2013 will LTE terminals reach the UK market in volume. Nevertheless, the report includes a valuable compilation of observations that are valid across the world:

    1. Spectrum efficiency improvements due to LTE and LTE-Advanced will not be able to keep up with the projected traffic demand. By the end of the decade spectrum efficiency is expected to grow roughly 5.5 times, which translates into an average yearly growth of about 18.5% (see the quick check after this list).
    2. Under a more pessimistic assumption based on the traffic profile mix and the head-room needed in real network deployments (as opposed to full loading), the actual gain may be even smaller, as low as 3 times the 2010 capacity.
    3. Dense urban environments cannot be served by the combination of spectrum efficiency gains and additional spectrum alone. Increasing the number of cell sites in the form of small cells is the only option.
    4. The cost of deploying these small cells is not well known. Since demand is directly affected by the availability of capacity, it is not known how the change in the unit cost of providing service via additional cell sites would in turn affect demand itself.
    5. The primary factor in the spectrum efficiency increase is the use of MIMO, which happens to be most effective in indoor environments and where the effective cell size is small. On the other hand, deploying multi-antenna base stations is a significant challenge in indoor, small-cell deployments. For example, the report predicts that even in 2020 only 5% of UK sites will have 8 antennas, whereas 50% will have 4 antennas and 45% will have dual antennas.
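
    A one-line check of the growth rate in item 1:

```python
# 5.5x spectrum-efficiency growth over a decade as a compound annual rate.
annual_growth = 5.5 ** (1 / 10) - 1
print(f"{annual_growth:.1%}")  # ~18.6%, in line with the ~18.5% figure above
```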

    Coming back to our title, the fundamental decision a mobile network executive must make is how to balance investment between additional spectrum and new cell sites. We believe that when it comes to adding new spectrum there are two complementary paths:

    1. Invest in licensed spectrum: the obvious strategies are purchases of newly available spectrum, or mergers and acquisitions of competitors’ assets.
    2. Invest in off-loading to unlicensed spectrum: increase the level of traffic off-load using WiFi.

    Licensed spectrum prices vary significantly from country to country. Nevertheless, in every country the underlying primary metric is Currency/MHz-POP, where MHz-POP is the population-weighted amount of spectrum. Some recent transactions that help establish pricing for licensed spectrum:

    1. AT&T’s purchase of T-Mobile for $39B. If we assume AT&T is purchasing T-Mobile for its spectrum only, the price per MHz-POP comes to $2.70. Compared to the 700 MHz auctions, where the per MHz-POP price was closer to $1.40 (for medium and large markets), AT&T paid almost double. However, considering the infrastructure and people who will join AT&T as part of the transaction, as well as the 33M subscribers who will join AT&T’s customer base, it doesn’t look like AT&T deviated much from the 700 MHz auction pricing.
    2. A recent estimate from the French Ministry is €1.8B ($2.55B) for 30+30 MHz in the 800 MHz band. Considering that the French population is roughly 63M, the spectrum price comes to around $0.68/MHz-POP. This is significantly lower than US prices and may reflect mainland Europe’s more cautious attitude towards growth in mobile data services, especially after the exuberance of the 3G spectrum auctions a decade ago (the MHz-POP arithmetic is sketched below).
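
    The MHz-POP arithmetic behind those two data points, as a minimal sketch; the assumed T-Mobile average spectrum depth (~46 MHz) and the ~310M US population are our illustrative inputs, not figures from the post:

```python
# Price per MHz-POP = price / (MHz of spectrum x population covered).
def usd_per_mhz_pop(price_usd: float, mhz: float, population: float) -> float:
    return price_usd / (mhz * population)

# French 800 MHz estimate: ~$2.55B for 2x30 MHz over ~63M people
print(round(usd_per_mhz_pop(2.55e9, 60, 63e6), 2))   # ~$0.67/MHz-POP

# AT&T / T-Mobile: $39B attributed entirely to spectrum
# (assuming ~46 MHz average depth over ~310M POPs)
print(round(usd_per_mhz_pop(39e9, 46, 310e6), 2))    # ~$2.73/MHz-POP
```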

    For unlicensed spectrum, the cost model is entirely dependent on the network implementation cost, which is comparable to that of small cells on licensed spectrum, especially in very dense indoor environments where providing the necessary capacity is not possible with traditional dense-urban macro-cell or micro-cell deployments.

    For example, the Real Wireless/Ofcom report uses King’s Cross train station in London as a case study. Primarily due to the customer density (over 1 million people per square kilometer), the traffic load in King’s Cross is expected to be over 27 times that of a typical dense-urban deployment. Considering the station is roughly 8,000 square meters, it can be covered by deploying 6-7 single-sector pico/femto cells with an inter-site distance of 60 meters. A similar area can be covered with a very dense 802.11n deployment using 5 GHz spectrum (chosen for an apples-to-apples comparison with 4G systems relying on MIMO and very wide channels). Following the experience of companies such as Xirrus, Meru and Ruckus, it seems possible to reach comparable traffic densities using WiFi if the AP density is at least one order of magnitude higher.

    As noted in the Real Wireless/Ofcom report, the first order of business is to calculate the true cost of dense deployments of licensed-spectrum technology, along with the projected cost of spectrum, to arrive at the unit cost of wireless data service. Only then will we be able to understand whether the industry is headed for significant bottlenecks in addressing user demand in hot-spot areas. Such analysis is independent of the natural progress of 4G deployments, where a large portion of the coverage area will continue to be served successfully by macro or micro sites.

    The next natural step would be to compare that unit cost for hotspot locations against the cost of using unlicensed technology (WiFi). The last 8-10 years have shown that WiFi is the superior choice even with the very limited 2.4 GHz band. We estimate that 30-40% of wireless traffic is already carried over WiFi in the developed world, where wired infrastructure is more readily available. The majority of this offload happens at home, and it is growing into a larger share of overall traffic: the Real Wireless/Ofcom report predicts at-home use will reach 58% of total traffic by 2013, and when office use is added the total opportunity goes up to 85% of overall traffic.

    In summary, the choice facing the wireless network executive is not a simple bifurcation between spectrum and additional cell sites. Instead, a multi-pronged approach is the advisable path:

    1. Deploy the technology advances (spectrum efficiency)
    2. Make spectrum purchases to plan for traditional macro and micro cell deployments for dense urban, urban and rural coverage
    3. Identify hotspots (the 3-4% of sites that will carry 30-40% of total network traffic) and find ways to use dense WiFi deployments to off-load traffic
    4. Work with device manufacturers to promote the adoption of higher-order MIMO for 802.11n and the use of the 5 GHz band
    5. Deploy core network technologies such as IP Flow Mobility (IFOM) to promote the use of WiFi at home and in the office to off-load traffic from the radio access network, while keeping a close tab on the service so it is not bypassed by over-the-top providers

    Another Outage / More Lessons: Geo-redundancy Couldn't Prevent Verizon LTE Service Disruption

    After the previous week’s major outage at Amazon, this past week we witnessed Verizon suffer an outage on its new LTE network. Starting on Tuesday, April 26th, customers with HTC Thunderbolt smartphones, USB sticks and portable hotspots noticed that they could not register with the network anywhere in the USA. The outage was acknowledged by Verizon on Wednesday morning. By Wednesday afternoon, Verizon reported that it had found the root cause and was planning to restore service market by market, and by Thursday morning it announced the restoration of service with a tweet. Throughout the outage, customers expressed frustration at not being able to connect to the EV-DO network either; this was primarily due to faulty modem settings in the Thunderbolt and some of the modems. Considering there are only about 500-600K customers with LTE devices, the overall impact of the outage was fairly small for Verizon.

    Last Thursday, Simon Leopold, an analyst at the investment firm Morgan Keegan, released a research note pointing to the NSN Home Subscriber Server (HSS) as the root cause of the problem. The network-wide nature of the problem and the long outage period gave credence to the claim. However, as of Monday, May 2nd, there has been no confirmation of the root cause from Verizon.

    Verizon revealed its IMS core vendor choices at the IMS World Forum in Barcelona, Spain in mid-April. These are:

    • NSN Home Subscriber Server (HSS)
    • Tekelec/Camiant Policy and Charging Rules Function (PCRF)
    • Acme Packet Session Border Controller (SBC)
    • Alcatel Lucent Call Session Control Function (CSCF)

    The context of the Verizon presentation at the IMS World Forum was Voice over LTE (VoLTE) implementation planning. However, the HSS and PCRF are already in use for LTE customers, providing subscriber registration and authentication (HSS) and management of service and traffic handling policies (PCRF).

    The following diagram, from an Alcatel-Lucent LTE poster, describes the interworking between the CDMA2000 EV-DO network Verizon operates and the newly deployed LTE network. Without going into the details of the Release 8 standards, we can point out that the Packet Data Network Gateway (PDN-GW) is the anchor point for mobility between the EV-DO and LTE networks. Rather than relying on client-based Mobile IP, the 3GPP Release 8 standards rely on Proxy Mobile IP, which is implemented both in the Serving Gateway (S-GW) and in the access network’s Packet Data Serving Node (PDSN). Verizon has a dual-stack implementation that provides both IPv6 and IPv4 addresses when the device is connected to LTE; when it uses EV-DO, the device gets an IPv4 address only. Therefore, true network mobility is currently supported for IPv4 only.

    Based on the symptoms of the outage, it appears likely that Verizon suffered an HSS outage. At the IMS World Forum, Verizon reported that it has built its IMS core over two geographically separate data centers (as you would expect from a telco with a long pedigree of reliability standardization, unlike those who suffered helplessly through the Amazon outage the previous week). However, it appears that even a geo-redundant network wasn’t sufficient to prevent the outage. We believe Verizon was operating its HSS in an active-active fashion that required very tight synchronization between the two database instances. There are two categories of failures such a system may suffer from:

    1. A network failure or a process failure that blocks the synchronization between the two active databases.
    2. A network, software or hardware failure that renders one of the active systems unable to process incoming requests from network elements.

    In the first scenario, the system rapidly moves into a state where the (geographical) high-availability capability is lost. However, this scenario may not be catastrophic if the operator can manage and regulate the incoming load (until the next maintenance window) and then execute an emergency replication between the databases.

    In the second scenario, the system should have been designed such that the HSS in one geographical location can handle the entire system-wide load. Unfortunately this is easier said than done, since transactional systems like the HSS may see very high transient loads compared to steady-state traffic. To better explain, imagine half the LTE devices in the Verizon network attempting to register with the alternative HSS within a very short time (a few minutes). All of a sudden the required service rate may be many thousands of requests per second. It is quite possible that the HSS suffered such a transient overload, which then affected the normally operating LTE devices as well. In such a scenario, the network can quickly move into a state where every device constantly tries to register while the network cannot handle the total offered load at once. Verizon’s explanation of restoring service market by market makes us think this is a plausible scenario for last week’s outage (a toy model of this retry-driven overload is sketched below).
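
    To make that failure mode concrete, here is a toy discrete-time model (every parameter is an illustrative assumption, not a Verizon or NSN figure): devices whose registration attempts fail keep retrying, and once the offered load exceeds what the HSS can process, timed-out attempts waste capacity and the backlog grows instead of draining.

```python
# Toy model of a registration storm against a single surviving HSS.
# All parameters are illustrative assumptions, not actual Verizon/NSN figures.
HSS_CAPACITY = 2000        # registration attempts processed per second
BASELINE_RATE = 500        # normal re-registrations arriving per second
STORM_DEVICES = 300_000    # devices failed over onto this HSS at t=0
RETRY_INTERVAL = 10        # seconds an unregistered device waits before retrying
STEP = 10                  # simulation step in seconds

backlog = float(STORM_DEVICES)   # devices still without a successful registration
for t in range(0, 61, STEP):
    offered = BASELINE_RATE + backlog / RETRY_INTERVAL       # attempts per second
    if offered <= HSS_CAPACITY:
        completed = offered
    else:
        # Overloaded: attempts time out yet still burn processing, so useful
        # completions collapse roughly as capacity^2 / offered.
        completed = HSS_CAPACITY ** 2 / offered
    backlog = max(0.0, backlog + (BASELINE_RATE - completed) * STEP)
    print(f"t={t:3d}s offered={offered:8.0f}/s completed={completed:6.0f}/s backlog={backlog:9.0f}")
# The backlog grows rather than drains; re-admitting devices market by market
# keeps the offered load below HSS_CAPACITY so registrations can complete.
```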

    One way to reduce the likelihood of such outages would be to make the HSS more distributed. This has to be balanced against the desire to keep provisioning manageable. Nevertheless, it might be possible to develop a network design where the HSS database, as well as the components serving signaling requests from network elements such as the Mobility Management Entity (MME), is further distributed. Another constraint would be the propagation delay, which ultimately affects the latency of the synchronization traffic.
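
    For a feel of that last constraint, here is a sketch of one-way fiber propagation delay between candidate HSS sites; the roughly 2/3-of-c propagation speed is the usual rule of thumb and the distances are illustrative:

```python
# Approximate one-way fiber propagation delay between geo-redundant sites.
SPEED_OF_LIGHT_KM_PER_S = 300_000
FIBER_VELOCITY_FACTOR = 2 / 3            # light travels at roughly 2/3 c in fiber

def one_way_delay_ms(route_km: float) -> float:
    return route_km / (SPEED_OF_LIGHT_KM_PER_S * FIBER_VELOCITY_FACTOR) * 1000

for km in (500, 1500, 4000):             # illustrative inter-site fiber routes
    print(f"{km:5d} km -> ~{one_way_delay_ms(km):.1f} ms one-way")
# ~2.5, ~7.5 and ~20 ms respectively; round trips plus protocol overhead make
# tight synchronous replication harder as the sites move farther apart.
```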

    Another strategy would be to use tight operational procedures to shed load, or shut down parts of the network, in case of similar instabilities induced by high transient loads. This would not eliminate the outage, but it would help reduce its impact and duration. We suspect Verizon wasn’t expecting to see a 24-hour impact on its network.

    Moving to all-IP networking with LTE is a revolutionary change that impacts the entire network, including radio, transport and core. Even though all-IP networking provides a flatter network architecture, there are still hierarchical nodes in the mobile network, such as the HSS, PCRF and charging platforms. Although they are not single points of failure, they have a much bigger impact on network service levels than the access network, so they need to be designed with a different level of care and attention to detail. We believe this outage was a big learning experience for Verizon; they must have conducted their root cause analysis and will make the necessary corrections. The remaining question is whether they will follow in Amazon’s footsteps and refund their customers for the loss of service.


    Lessons of Amazon Cloud Outage

    I happen to be one of the luckier website owners using AWS. Even though my site was hosted in AWS US-East (in Virginia), where a major outage knocked out many sites including Reddit, Foursquare and BigDoor, my availability zone (us-east-1d) was somehow recovered earlier than the other availability zones in the region. Yesterday evening (April 23rd) at 7 pm PDT, AWS finally reported that Elastic Compute Cloud (EC2) services were working normally across all availability zones within the Virginia data center. Considering it took almost 3 days to get to this point, and there are still problems restoring specific instances for various customers, it is quite natural that a very heated discussion about the viability of cloud services is going on in technology blogs.

    Amazon has built five AWS data centers so far. These are in:

    • Ashburn, VA
    • Palo Alto, CA
    • Dublin, Ireland
    • Singapore
    • Tokyo, Japan

    In each of these locations, Amazon built multiple independent rooms/halls, which it calls availability zones. For example, the Ashburn location has four distinct availability zones, the Palo Alto and Dublin data centers have three each, and Singapore and Tokyo offer two. Amazon claims that availability zones are independent of each other in terms of physical infrastructure. The following excerpt is from the AWS EC2 FAQ:

    “Each availability zone runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Common points of failures like generators and cooling equipment are not shared across Availability Zones. Additionally, they are physically separate, such that even extremely uncommon disasters such as fires, tornados or flooding would only affect a single Availability Zone.”

    Until Thursday, April 21st, 2011, AWS had not suffered an outage impacting multiple availability zones at once; all previous outages were limited to a single availability zone. For such outages, AWS recommends the use of load balancers within a data center so that traffic can be redirected to a different availability zone.

    The latest AWS outage has shown that Amazon’s assumptions about “fate-sharing” were not entirely true. Based on the extensive information on tech blogs, it is apparent that the original network failure in one of the availability zones triggered re-replication of disk data across the other zones in Ashburn, as well as additional traffic and server instances in those zones. This resulted in a cascading outage that impacted all availability zones in Ashburn for almost 12 hours on Thursday. Eventually Amazon managed to recover three of the four availability zones, and since then the impact of the outage has been limited to zone B in Ashburn.

    As with any such event, many people came out with very strong words about the impact of this outage on the fate of cloud computing. Ultimately the analysis boils down to two extremes:

    • “Sky is falling” camp: cloud computing is inherently more complex than necessary. All those features that allow sharing, dynamic resource allocation and virtualization come at a great cost, and this unnecessary complexity is the soft underbelly of cloud computing.
    • “Sky is bluer than ever” camp: cloud computing provides scale, elasticity and (yes!) reliability far better than the standard model of building and hosting your own equipment. Design your application and system accordingly: use geo-redundancy, build your application for failure, and if you are paranoid enough use not one but multiple clouds.

    The biggest flaw of the pessimists’ approach is its inability to provide the necessary flexibility in resource allocation. If a systems architect wants to design a fully redundant service over two geographically diverse locations, she needs to provision 250% of the peak capacity, assuming the engineering limit is 80% of the capacity of the resources at any given site. Considering that many web/mobile services have very unpredictable growth curves, building 250% of peak capacity from day one is a disastrous decision; that is why many data centers today are full of rows of racks of very lightly used equipment. Certainly it is possible to increase the number of locations to reduce the total peak capacity that needs to be deployed. The following graph shows a scenario where 100 applications with the same load (1 unit each) are deployed over multiple sites. It clearly shows that with a simpler redundancy scheme, e.g., 1+1, the amount of resources necessary to support these applications is much higher. The graph assumes resources are dynamically shared among applications per the principles of cloud computing; if such dynamic sharing isn’t available, the total resources are independent of the number of sites, and with a 1+1 scheme one ends up building 250% of capacity for every application irrespective of the number of available sites (the capacity arithmetic is sketched below).
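
    The 250% figure, and the way the requirement shrinks as applications are spread over more non-fate-sharing sites, can be reproduced with a small sketch; the formula for the dynamically shared case is our reading of the scenario described above:

```python
# Capacity needed so the full load survives any single-site failure,
# with every site engineered to run at no more than 80% utilization.
ENGINEERING_LIMIT = 0.8

def required_capacity(sites: int, load: float = 1.0) -> float:
    """Total deployed capacity (as a multiple of load) across N load-sharing
    sites, sized so the remaining N-1 sites carry everything at the limit."""
    per_surviving_site = load / ((sites - 1) * ENGINEERING_LIMIT)
    return per_surviving_site * sites

for n in (2, 3, 5, 10):
    print(f"{n:2d} sites -> {required_capacity(n):.0%} of peak load")
# 2 sites -> 250%, 3 -> 188%, 5 -> 156%, 10 -> 139%: spreading the same
# applications over more sites cuts the total capacity that must be deployed.
```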

    Interestingly, in the dynamic resource allocation model, simply increasing the universe of possible locations reduces the total amount of resources required, because with a larger pool of sites the impact of any single site failure is significantly smaller. The sensible approach, therefore, is to keep the level of redundancy small, e.g., 1+1 or 2+1, but spread the applications over as many non-fate-sharing sites as possible. During last week’s outage, companies such as Netflix that have adopted this model came out unaffected, while those who relied on intra-data-center redundancy were hit much harder. Certainly Amazon didn’t make life any easier for those who believed its statements about the independence of availability zones within a data center.

    I expect Amazon, as well as the overall cloud industry, will come out of this ordeal even stronger than before. It is reasonable to expect Amazon to provide a better explanation of why and how failures impacting multiple availability zones can occur. Rather than going back to owned or hosted equipment, more systems architects will dig into the details of the AWS architecture, especially its use of Elastic Block Storage (EBS), how EBS can be combined with the Simple Storage Service (S3), and how multi-region redundancy can be built. I am sure those who can justify it will adapt their applications to use multiple cloud providers along with global load balancing. Ultimately the real winner for the cloud is its elasticity; that benefit alone is enough to convince technology leaders to adopt cloud services more forcefully. Unless a company is willing to invest in infrastructure as a technology, cost or time-to-market differentiator for its business, cloud service is the only viable solution. In other words, if you are not ready to be the next Google, don’t bother building your own.

    You can find a useful article about the outage and its lessons at this blog.



    Migrating to a Content Management System Running over Cloud Services

    After having had a website for over 6 months, I have decided to do two migrations I had been hoping to do for some time:

    • Migrating to a self-hosted WordPress implementation
    • Migrating to a cloud service

    There are significant benefits to running my own WordPress instance:

    • I can now combine my website with my blog, so there are no longer two separate identities
    • I can monitor the details of the traffic profile for my website and my blog using Google Analytics
    • I can add any of the many available plug-ins, as opposed to only those selected by WordPress

    Similarly running the WordPress instance over a cloud service such as Amazon Web Services (AWS) brings significant benefits:

    • Standardized set of tools to manage the services
    • Ability to scale as demand grows
    • Low price of entry; to start with, it is pretty much free

    Innovation in web technologies over the last 17-18 years has been spectacular. Today anyone with a small amount of technical aptitude can design and deploy a decent-looking website in an afternoon without spending any money. Better yet, she can manage and monitor its performance, develop campaigns, track take rates and change content based on adoption and feedback from users. I am not sure if it exists, but a Moore’s Law-type postulate about the cost of implementing web services would show a very rapid price/functionality depreciation over the 17+ years since the introduction of Netscape. As a matter of fact, since we are already at “zero cost” for an amazing level of functionality, it is hard to imagine what would come next.

    Obviously the biggest reasons for this spectacular improvement in price/performance are:

    • Major improvements in computing platform performance that allow more complex software to be developed and maintained
    • Orders-of-magnitude higher bandwidth and storage that allow and encourage sharing programming tools and environments more easily
    • The success of open-source software, which has leveled the playing field for those who don’t have the means to pay for rich capability sets

    Thanks to this success, and especially since the advent of smartphones, the concept of “over-the-top” defines a significant portion of innovation in the wireless marketplace. That is why we have companies like Foursquare, Twitter and Facebook innovating on top of the expanding wireless networks rather than the mobile operators. The dominance of over-the-top is primarily due to the brute force of so many different companies chasing very similar functionality. On the other hand, constraints such as limited spectrum resources, high capital requirements for large geographical coverage, the need for a retail presence and customer support expectations limit the level of competition in the wireless service provider field. With more consolidation being planned, the wireless services industry seems to be heading towards:

    • Operators vacating the services field to over-the-top providers (becoming the dreaded pipes)
    • Terminal device manufacturers trying to substitute for over-the-top providers (strategies such as Apple’s MobileMe, HTC’s Sense, Motorola’s Blur, etc.)
    • Over-the-top providers trying to get into the terminal device business (so far unsuccessfully, e.g., Google with the Nexus)

    Please visit us at the updated website: http://www.wirelesse2e.com. Copies of old articles can be found there along with our future posts. I will retire this site before the end of May, as long as running WordPress over AWS proves as smooth as the hosted WordPress service.


    iPhone is the Culprit behind AT&T’s Purchase of T-Mobile

    I found yesterday’s news quite hard to believe. Even though an AT&T / T-Mobile merger had been talked about many times in the past, the idea had recently faded, primarily due to AT&T’s size and concerns about how the US government would scrutinize such a deal. Considering the initial feedback, those concerns seem very valid. Probably every consumer protection group in the US will end up petitioning the DOJ and the FCC, and many competitors, including Verizon and Sprint, will object to at least parts of this deal. In the last 4-5 years, T-Mobile has steadily moved towards serving lower-income groups, who tend to be more cost-conscious and less likely to be part of employer phone plans, so there is likely to be a lot of opposition from those representing T-Mobile customers. I think the US government is going to look at this deal at a geographical granularity and will make approvals on a Major Trading Area (MTA) / Basic Trading Area (BTA) basis.

    The following tables are from the FCC’s 14th annual report on mobile wireless competition, published in June 2010. The first table shows the spectrum assets in the US.

    The second table from the FCC report shows the distribution of spectrum (MHz-POP) among wireless operators. This table was prepared before AT&T’s purchase of Qualcomm’s 700 MHz assets and before the FCC’s waiver allowing LightSquared to build a terrestrial-only system (provided it can resolve the concerns about GPS jamming).

    After the AT&T / T-Mobile merger, the combined company would own:

    • Over 35% of 700 MHz spectrum (once the Qualcomm purchase is completed)
    • Over 42% of cellular spectrum
    • Over 45% of PCS spectrum
    • Almost 40% of AWS spectrum

    Considering that Sprint’s SMR spectrum is stuck with legacy iDEN until 2013, and that Clearwire’s use of BRS/EBS is not really going anywhere until they switch to LTE and commercialize its TDD version, there is not much competition left to the AT&T and Verizon duopoly.

    The following two graphs show the MHz-POP share before and after the AT&T / T-Mobile merger, based on the figures from the FCC 2010 report.

    Before the merger:

    After the AT&T / T-Mobile merger:

    I find it quite hard to believe that this merger will get a blanket approval; instead it will be reviewed on a license-block basis. T-Mobile doesn’t add much to AT&T in terms of coverage, but it will have a very significant impact in terms of capacity. This is primarily due to the much lighter loading of T-Mobile’s spectrum relative to its customer base. The following figure shows the number of customers per MHz-POP for the big seven operators. I excluded Clearwire from this graph since they are in a build-out phase, with very few customers relative to their vast spectrum holdings.

    The chart above shows that T-Mobile’s current loading of its spectrum assets (customers per MHz-POP) is about 35% lower than that of Verizon, the market leader. Interestingly, T-Mobile’s utilization is even lower than that of MetroPCS and US Cellular; only Leap has a lower utilization of its spectrum assets than T-Mobile. To put it another way, T-Mobile has enough spectrum to serve almost 50M subscribers at a utilization similar to that of the market leaders Verizon and AT&T. Ultimately that deficit of roughly 15M subscribers forced DT’s hand to sell T-Mobile, while making it an equally attractive target for AT&T (a back-of-envelope version of this calculation follows below). The following chart shows the spectrum utilization after the AT&T / T-Mobile merger (assuming all assets are merged) takes place.
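
    A back-of-envelope version of that headroom calculation, as a rough sketch; the 33M subscriber figure is the one quoted earlier in this blog and the 35% loading gap is read off the chart, so the output is only approximate:

```python
# Rough spectrum headroom estimate for T-Mobile based on relative loading.
TMOBILE_SUBS_M = 33          # subscribers (millions), as quoted earlier
LOADING_VS_VERIZON = 0.65    # T-Mobile loading ~35% below the market leader

supportable_m = TMOBILE_SUBS_M / LOADING_VS_VERIZON   # ~51M at Verizon-like loading
headroom_m = supportable_m - TMOBILE_SUBS_M           # ~18M of unused headroom
print(f"~{supportable_m:.0f}M supportable, ~{headroom_m:.0f}M of headroom")
# In the same ballpark as the "almost 50M" capacity and ~15M deficit discussed above.
```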

    This chart shows the lighter loading of spectrum assets for the combined company. The new spectrum position would allow AT&T to add another 12-15M subscribers before reaching parity with Verizon in terms of spectrum usage.

    Alternatively, AT&T might be more focused on the growth of data usage on its network, and simply wanted additional (currently lightly used) spectrum assets to close that gap until technologies such as heterogeneous networks and intelligent WiFi traffic off-loading start having more impact.

    Coming back to the title of the article: I strongly believe the iPhone is the main culprit behind the merger. Here is why:

    • T-Mobile’s growth came to a halt after the introduction of the iPhone. Without the iPhone’s impact, T-Mobile would have easily surpassed the 45M subscriber mark by now.
    • AT&T’s network woes became much bigger after it started offering the iPhone. Its popularity has impacted network loading dramatically, i.e., significantly more than average subscriber numbers suggest.
    • The lack of AWS-band support in AT&T’s HSPA+ devices such as the iPhone will limit its ability to use T-Mobile’s 3G network in the short run. However, there are multiple ways to deal with that, including immediately supporting the AWS band in all AT&T devices including the iPhone 5; starting to re-farm T-Mobile’s PCS spectrum for 3G; and accelerating T-Mobile customers’ handset upgrades to include 850 MHz (cellular) support, which will eventually be the 2G spectrum for the combined company.

    Some other predictions/questions:

    • It’s hard to believe Verizon will sit by and silently watch all this. If this deal can be approved, what is there to prevent Verizon’s purchase of US Cellular, MetroPCS and Leap Wireless?
    • This deal makes Sprint+Clearwire the critical element of competition in the US marketplace. That should give more incentive to those who want to bet on a third alternative, although I am not sure whether Google, Intel or the cable companies still have the same passion they did a few years ago.
    • Sprint has to resolve the iDEN saga, shut it down rapidly and start moving down the TDD-LTE path right away. Obviously the fundamental question is whether it will find the funding it needs and build the device ecosystem, i.e., can Apple be convinced to build TDD-LTE devices at 2.6 GHz?
    • The cable companies (Comcast, Time Warner and Cox) have a sizable spectrum asset in the AWS band. It remains to be seen how they will use it in a new world where the number of players is reduced to three. Considering Sprint doesn’t own any AWS spectrum, it is hard to see a synergy there; on the other hand, Verizon and AT&T are hardly potential partners for cable companies.
    • LightSquared may benefit from this consolidation, since there will be a desire to create nation-wide competition to the big three. However, LightSquared still has technical hurdles as well as a major deployment challenge to get through. Since Clearwire and LightSquared have lost T-Mobile and AT&T as potential bidders for their spectrum or services, it is more likely that they will be competing for wholesale business that must come from others (could Apple, Microsoft or Google ever be a customer of such a network?)
    • Smaller players such as EchoStar in the 700 MHz band will be folded into either AT&T or Verizon.

    These are certainly exciting times. As in the last 15 or so years (the period starting with the 1995 PCS auctions), the US wireless industry will continue to consolidate while generating a lot of new companies and businesses along the way.
