All-IP Networking: Effect on Battery Life

Undoubtedly the slowest-improving part of a mobile device is its battery. As I discussed in the last post, battery energy density doubles roughly every 10 years. Considering that peak throughput grows ~50% per year, the gap between the two is significant and widening.

Before we get into the impacts of all-IP networking on battery life, let’s first look at the energy density of batteries. Today the highest-energy-density batteries are Lithium-Ion. Claus Daniel described the materials and processing methods for Lithium-Ion batteries in his article in JOM, and in it he gives the practical limits of state-of-the-art battery energy density: 1.35 Amp-hours for a 40 g battery. A more recent datasheet from Panasonic quotes 1.95 Amp-hours for a battery of similar weight. The iPhone 4 battery at ~1.4 Amp-hours for 40 g and the Nexus One battery at 1.4 Amp-hours for 30 g are respectable numbers. All of these figures tell us that today it is possible to pack somewhere between 18-20 kJ into a state-of-the-art Lithium-Ion battery at a weight acceptable to a mobile device owner.
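
As a quick sanity check on these numbers, here is a minimal back-of-the-envelope calculation; the 3.7 V nominal cell voltage is my assumption (typical for Lithium-Ion chemistry), since the figures above quote capacity only in Amp-hours:

```python
# Back-of-the-envelope energy content of a Li-Ion phone battery.
# The 3.7 V nominal cell voltage is an assumption (typical for Li-Ion chemistry).
NOMINAL_CELL_VOLTAGE_V = 3.7

def battery_energy_kj(capacity_ah, voltage_v=NOMINAL_CELL_VOLTAGE_V):
    """Stored energy in kJ: capacity (Ah) * 3600 (s/h) * voltage (V) / 1000."""
    return capacity_ah * 3600 * voltage_v / 1000

for name, capacity_ah in [("iPhone 4", 1.4), ("Nexus One", 1.4), ("Panasonic cell", 1.95)]:
    print(f"{name}: {battery_energy_kj(capacity_ah):.1f} kJ")
# -> ~18.6 kJ for the 1.4 Ah cells and ~26 kJ for the 1.95 Ah cell,
#    in line with the 18-20 kJ figure quoted above for the ~1.4 Ah batteries.
```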

A newer technology, Lithium-Ion nanowire, is quite promising for a substantial increase (ideally 4x) in energy density. A Silicon Valley start-up, Amprius, is building a Lithium-Ion battery in which silicon (structured as nanowires) is used as the anode instead of carbon. Assuming Amprius solves the re-charging issues, we may anticipate batteries storing 120 kJ (for 40 g) in the 2015 time-frame. Interestingly, this corresponds to 3 MJ/kg, while nitroglycerin (the active ingredient of dynamite) has an energy density of 6.3 MJ/kg. That tells me at some point we will start putting very different labels on phones. 🙂

Now that we have established the energy (and its limits) that we can pack into a mobile-phone form-factor battery, let’s look at the amount of processing we need to support new wireless standards such as LTE or LTE-Advanced while increasing the peak throughput first to 100-150 Mbit/s and eventually to 1 Gbit/s (for 100 MHz channelization).

A great paper summarizing how mobile device processing-power requirements have changed over time was written by Kees van Berkel. He summarized the relationship between bandwidth and processing-power requirements with the following graph:

Considering that the power envelope of a typical handheld device is around 3 W (due to heat dissipation), van Berkel estimates about 1 W is available for all the processing, while the other 2 W go to the power amplifier, RF front-end, auxiliary interfaces, and display lighting. Therefore, to keep up with the requirements of newer air-interface technologies, handset designs must increase energy efficiency dramatically. In other words, using the same 1 W that delivered 50 GOPS (giga operations per second) for HSDPA in 2005, an LTE-Advanced-capable handset must be able to perform 2 TOPS (tera operations per second) by 2015.
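
To make the efficiency target concrete, here is a small sketch using the figures above (a fixed 1 W processing budget, ~50 GOPS for HSDPA, ~2 TOPS for LTE-Advanced):

```python
# Energy budget per operation implied by a fixed 1 W processing envelope.
PROCESSING_BUDGET_W = 1.0  # out of the ~3 W total handset envelope

def budget_pj_per_op(ops_per_second, power_w=PROCESSING_BUDGET_W):
    """Average energy available per operation, in picojoules."""
    return power_w / ops_per_second * 1e12

print(budget_pj_per_op(50e9))  # HSDPA-era workload, ~50 GOPS   -> 20.0 pJ/op
print(budget_pj_per_op(2e12))  # LTE-Advanced workload, ~2 TOPS -> 0.5 pJ/op
```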

Certainly there is a large variety of operations within a handset, including the various physical-layer functions, codec functions for different media types, MAC and higher-layer protocols, display computation, rendering, etc. The following graph from van Berkel describes the various hardware types, e.g., microprocessors, DSPs, and specialized hardware, that can be applied to these operations, along with their energy requirements with respect to load.

During the last decade all handset devices have adopted the same model of mixed-mode computation, using systems-on-chip that contain microprocessors such as ARM cores, various DSP modules, as well as dedicated ASICs for physical-layer functions.

Even though phenomenal increases in energy efficiency are promised for new microprocessors, primarily due to the ever-increasing scale of integration, it is quite unlikely that this heterogeneous handset design will change. For example, the ARM Cortex-A15, which was announced in September 2010 and is forecast to be available in 2013, will provide an energy efficiency over 3x that of the ARM11. Based on van Berkel’s estimate of 200 pJ/op for the ARM11, we can estimate about 70 pJ/op for the Cortex-A15. However, even at this rate, performing all 2 TOPS on multiple Cortex-A15s would require 140 W. Considering that humans lack asbestos-covered hands, this is not a feasible solution. 🙂
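
The 140 W figure is simply the workload multiplied by the per-operation energy; a quick check of the arithmetic:

```python
# Power needed to sustain 2 TOPS entirely on application-processor cores,
# using the ~70 pJ/op estimate for the Cortex-A15 derived above.
ops_per_second = 2e12      # LTE-Advanced workload target
energy_per_op_j = 70e-12   # ~70 pJ per operation

power_w = ops_per_second * energy_per_op_j
print(power_w)  # -> 140.0 W, i.e. more than a hundred times the 1 W budget
```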

Computing the energy-per-operation limit implied by the 1 W constraint, we end up with a weighted-average figure of 0.5 pJ/op. Today this number is not feasible with multi-core DSPs. Based on the graph above, even for rigidly defined operations like the radio front-end or the modem, achieving this level of energy efficiency is difficult for programmable devices. Nevertheless, the objective is to bring the overall weighted-average energy efficiency of operations down to the target level. I anticipate that with a combination of devices like the Cortex-A15, larger numbers of multi-core DSPs, and purpose-built ASICs in 16 nm and 11 nm technologies (expected in 2015), it should eventually be possible to get close to 0.5 pJ/op. Using Lithium-Ion batteries, this would give close to 3 hours of peak-throughput (1 Gbit/s over 100 MHz) operation in 2015. Better yet, if Lithium-Ion nanowire batteries become a commercial reality, we may even see over 10 hours of peak download capability by the same time.
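
A rough reconstruction of those battery-life figures is sketched below; the ~2 W sustained draw during a peak-rate download is my assumption (the real figure depends on the power amplifier, display, and so on), chosen only to show how the battery capacities above translate into hours:

```python
# Rough battery life while sustaining peak downlink throughput.
# The 2 W sustained draw is an illustrative assumption, not a measured figure.
SUSTAINED_DRAW_W = 2.0

def hours_of_peak_download(battery_energy_kj, draw_w=SUSTAINED_DRAW_W):
    """Hours until the battery is drained at a constant power draw."""
    return battery_energy_kj * 1000 / draw_w / 3600

print(hours_of_peak_download(20))   # state-of-the-art Li-Ion (~20 kJ)  -> ~2.8 h
print(hours_of_peak_download(120))  # Li-Ion nanowire target (~120 kJ)  -> ~16.7 h
```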

One of the important factors that will determine battery lifetime is the use of IP transport for voice services. Unlike 2G or 3G systems, LTE and LTE-Advanced provide voice as an application rather than as a well-defined bearer type. Currently there are three possible options for voice:

  • Circuit-Switched Fallback (CSFB): It uses the legacy 2G/3G network to carry voice and SMS services while LTE is used purely for data. Currently MetroPCS is using this method; MetroPCS Samsung Craft customers have to give up their data sessions to make or receive a call because of it. It is certainly not very customer friendly, considering one of the biggest promises of 3G was simultaneous voice and data capability.
  • Voice over LTE via Generic Access (VoLGA): It uses an extension of Unlicensed Mobile Access (UMA) to carry voice and SMS services. Unlike CSFB, it is not a 3GPP standard, and its long-term viability is limited by lack of adoption.
  • Voice over LTE (VoLTE) using Multimedia Telephony (MMTel): It depends on the MMTel / IP Multimedia Subsystem (IMS) standards that have been defined as part of 3GPP standardization since 2002. VoLTE depends on specific radio features to increase coverage, capacity, and quality.

Since the future of voice has settled on VoLTE, end-to-end system designers must look at its impact on battery life. VoLTE requires certain capabilities from the eNodeB and the user terminal. Some of these are:

  • Reduction of packet size using Robust Header Compression (ROHC).
  • Use of rapid retransmission for error correction.
  • Use of Explicit Congestion Notification (ECN) to adapt the speech rate.

Of these, ROHC is the primary source of additional operational complexity compared to the circuit-switched voice bearer processing in 2G or 3G systems. Looking back at this post, we can conclude that ROHC impacts handset battery life as follows (a short sketch of the header overhead ROHC removes follows the list):

  • It increases the total number of operations per second, depending on the duty ratio of the voice service. Since ROHC must be performed for every voice packet, the added complexity is directly proportional to the length of voice calls.
  • It pushes the weighted-average energy per operation higher, since it must be processed either in a DSP or potentially in an application microprocessor. Since these devices will continue to have much higher energy requirements than custom ASICs, this has a negative impact on battery life.
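
As a rough illustration of why ROHC is worth that per-packet cost, here is the overhead it removes; the header sizes are the standard IPv4/UDP/RTP sizes, while the AMR payload size and the compressed header size are typical approximate values:

```python
# Per-packet overhead of a VoLTE voice packet, with and without ROHC.
# Header sizes are the standard IPv4/UDP/RTP sizes; the ~32-byte AMR 12.2 kbps
# payload and the ~3-byte steady-state ROHC header are typical approximate values.
IPV4_HDR, UDP_HDR, RTP_HDR = 20, 8, 12   # bytes
ROHC_HDR = 3                             # bytes, typical compressed header
AMR_PAYLOAD = 32                         # bytes per 20 ms speech frame

uncompressed = IPV4_HDR + UDP_HDR + RTP_HDR + AMR_PAYLOAD  # 72 bytes per packet
compressed = ROHC_HDR + AMR_PAYLOAD                        # 35 bytes per packet

print(f"header overhead without ROHC: {(IPV4_HDR + UDP_HDR + RTP_HDR) / uncompressed:.0%}")  # ~56%
print(f"header overhead with ROHC:    {ROHC_HDR / compressed:.0%}")                          # ~9%
# The air-interface savings come at the cost of running the ROHC compression and
# decompression state machines on every single voice packet (50 per second for
# 20 ms framing), which is the operational complexity discussed above.
```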

Another important point to underline is that this impact of VoLTE on battery life is the price of having a well-defined voice service over LTE. Alternatively, if one decides to use an over-the-top mechanism such as Skype, Google Talk, Fring, etc., similar operational complexity (due to ROHC) will not be there, at the potential expense of reduced coverage, capacity, and quality. I am searching for papers that compare MMTel and over-the-top methods in terms of battery life, coverage, capacity, and quality. If you happen to know one, please drop me a note.


4 Responses to All-IP Networking: Effect on Battery Life

  1. Manoj Das says:

    How does battery life correlate with different access networks? For example, which access network will affect the battery life of the device the most: 802.11a/g (20 MHz OFDM), 20 MHz WiMAX, or 20 MHz TDD-LTE, assuming similar user throughput and disregarding the spectrum band? What is the impact of a TDD vs. FDD access network on battery life?

    • wirelesse2e says:

      Manoj,

      Battery life is a function of items like display lighting, application processing, video processing, etc., which are all equal across devices irrespective of the access network used. If we put those aside, then we have other factors like the RF front-end and the modem, which can be pretty similar across the technologies you have listed, since they all use OFDMA with QPSK/QAM modulation and you removed the spectrum band from consideration. There is probably still some variation among these technologies on the uplink, with the use of SC-FDMA versus OFDMA and the help it provides to the power amplifier. Certainly another obvious factor is the cell size one tries to support with any of these technologies. Once again, the uplink (reverse link) is the determining factor for transmit power and, in turn, battery consumption.

      FDD versus TDD is an interesting question. Since a typical wireless system is forward-link skewed in usage, intuitively it makes sense to have a TDD system where the use of spectrum is similarly skewed toward the forward link. That way a handset doesn’t need to spend energy on the reverse link if it doesn’t have anything to send. I think this argument favored TDD systems in the past, when multiplexing was based on TDMA. However, with SC-FDMA and OFDMA it is possible to adjust the duration of the transmission burst to the amount of data to be sent, so I don’t believe TDD has any benefit there. On the other hand, the synchronization requirements of TDD and the limited suitable spectrum make TDD less likely to succeed on its own. However, TDD could be an overlay solution for broadcast-type services à la MBMS.

      Thanks for your questions,
      Murat

  2. Michael Phan says:

    Very informative. In addition to the impact of all-IP (VoLTE) on battery life, I am interested to understand:
    1. network signalling resulting from smartphone chattiness and the SIP protocol
    2. cost per bit / spectrum efficiency when comparing VoLTE with 2G/3G circuit-switched voice.

    • wirelesse2e says:

      Michael,

      Thank you for your comment and questions. I’ll try to give short answers and some pointers that you can follow for further information:

      1. The battery-life impact of SIP is negligible if you are only considering call set-up, tear-down, and vanilla supplementary-services messaging. However, for applications such as instant messaging or presence, the SIP messaging load is much higher. Especially considering that almost all mobile devices use IPv4 with NAT traversal, the amount of keep-alive signaling is noteworthy: SIP servers force SIP clients to use SIP re-registration to double as keep-alive messages. Certainly the amount of such signaling determines the impact on battery life. Mobile operators can adjust the validity of the NAT translation time-out; the longer that time-out period is, the less frequent the SIP re-registration and the better the battery life. However, the penalty for the mobile operator is spending more money on NAT hardware. I suppose a much simpler solution is to bite the bullet and move to IPv6.

      2. VoIP without header compression will always be significantly less efficient than circuit-switched voice. That is why an RTP/UDP header compression RFC (RFC 2508) was written as early as 1999. RFC 3095, Robust Header Compression, was primarily defined to handle this for wireless links. If you can find a copy of Springer-Verlag’s Wireless Personal Communications (2005), an article by Frank Fitzek and others claims that a bandwidth reduction of almost 50% compared to GSM-EFR is achievable using ROHC. However, as of 2011 I am not aware of a large-scale commercial success using VoIP/ROHC instead of GSM or UMTS CS voice. That tells me reality is quite different, primarily due to implementation constraints, battery being one of them.

      Murat
