Another Cause of Scarcity: Battery Life!

The following graph is from a 2008 EE Times article written by Findley Shearer, describing the importance of power management for mobile devices:

It refers to another article that appeared at Freescale’s Technology Forum as the original source of the figure. I am not really sure how the original authors came up with a figure of doubling performance every 8.5 months, since this contradicts the following figure from Mario Amaya’s 2008 Master’s Thesis at MIT that I quoted in an earlier post. Amaya calculates the yearly throughput growth rate (assuming that’s what they used as performance) to be closer to 51% over the last 30 years. This would result in a doubling of throughput roughly every 20 months, as opposed to the 8.5 months the Freescale article suggests.
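For reference, here is a quick back-of-the-envelope check of that doubling time, using the 51% annual growth rate from the thesis (a minimal sketch of the arithmetic, not taken from either source):

    import math

    # Doubling time implied by a 51% annual throughput growth rate
    annual_growth = 0.51
    doubling_time_years = math.log(2) / math.log(1 + annual_growth)
    print(f"Doubling time: {doubling_time_years * 12:.1f} months")  # ~20 months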

Irrespective of their exuberance regarding the progress of wireless throughput, the authors (Chen and Barth) of the original paper (“eXtreme Energy Conservation for Mobile Communications”) pointed out two fundamental technology gaps:

  • memory access speed is doubling every 12 years
  • battery energy density is doubling every 10 years

The first gap deserves a dedicated series of posts of its own. In this post, I will focus on the second gap, namely the slow progress in battery energy density and its impact on battery life.

Throughout the history of the microprocessor, from the first IBM PC running at 4.77 MHz in 1981 to Intel’s Pentium 4 at 3 GHz in 2002, engineers increased the processor clock rate to deliver more performance. However, this performance improvement came at the expense of more heat dissipation, ultimately requiring more power and better cooling. Since 2002, commercial microprocessor development has moved in the direction of multi-core designs, better pipelining, and more efficient memory access. Today, typical consumer-grade devices use clock speeds of around 3.5 GHz.

On the mobile front, a similar rush to higher clock speeds was initially observed. For example, the original iPhone came out in 2007 with a Samsung/ARM chip running at 412 MHz (even though it was capable of running at 620 MHz). This was increased to 600 MHz for the iPhone 3GS in 2009 (capable of 833 MHz), and to roughly 800 MHz for the iPhone 4, which uses the Apple A4 System on Chip (SoC) also found in the iPad (clocked at 1 GHz). Similar processor clock rate increases have been observed for devices using chips from Intel, TI, Qualcomm, and other semiconductor manufacturers.

The history of server and desktop processors shows that processors used for mobile devices will hit a similar clock-speed plateau in order to maintain acceptable battery life. This will require semiconductor companies to move in the direction of multi-core processors, also known as Symmetric MultiProcessing (SMP). The Qualcomm Snapdragon family is a good example of this approach: the Qualcomm MSM8260/MSM8660 are two SoCs with dual application CPUs, each running at 1.2 GHz. TI, Nvidia, and ST-Ericsson are other companies with dual application processors in production.

The biggest benefit of SMP is the ability to increase peak processing power when the underlying application can take advantage of it. The best example is web browsing, where multiple threads can be opened to retrieve different components of a webpage while content such as Java or Flash is computed and displayed simultaneously. Especially with the advent of HTML5, multi-threading will become more prevalent and better able to take advantage of SMP.
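As a rough sketch of how a browser-like workload can spread across cores, here is a small Python example that fetches several page components on parallel threads. The URLs are hypothetical placeholders, and this is only an illustration of the multi-threading pattern, not how any particular browser is implemented:

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # Hypothetical placeholder URLs standing in for a page's components.
    COMPONENTS = [
        "https://example.com/index.html",
        "https://example.com/style.css",
        "https://example.com/app.js",
    ]

    def fetch(url):
        # Each download is I/O-bound, so threads overlap naturally and the
        # decoding/rendering work can be scheduled across the available cores.
        with urlopen(url) as response:
            return url, len(response.read())

    with ThreadPoolExecutor(max_workers=len(COMPONENTS)) as pool:
        for url, size in pool.map(fetch, COMPONENTS):
            print(f"{url}: {size} bytes")

On a dual-core SoC like the MSM8660, the operating system can schedule these threads on both cores, which is exactly the peak-performance benefit described above.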

On the other hand, when an application, such as streaming video playback, doesn’t require multiple threads, one of the application processors can simply be powered down to reduce battery consumption.
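On Linux-based mobile platforms, powering a core down is typically exposed through the CPU hotplug interface in sysfs. The snippet below is a minimal sketch of that mechanism, assuming a Linux system with hotplug support and root privileges; real devices rely on vendor-specific governors to make this decision automatically:

    # Minimal sketch: take CPU1 offline via the Linux CPU hotplug sysfs
    # interface (requires root and a kernel built with hotplug support).
    CPU1_ONLINE = "/sys/devices/system/cpu/cpu1/online"

    def set_cpu1_online(online: bool) -> None:
        # Writing "0" powers the core down; "1" brings it back online.
        with open(CPU1_ONLINE, "w") as f:
            f.write("1" if online else "0")

    set_cpu1_online(False)  # e.g., single-threaded video playback
    set_cpu1_online(True)   # restore the core for parallel workloads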

Other power management techniques, such as Dynamic Voltage and Frequency Scaling (DVFS) and Adaptive Voltage Scaling (AVS), can also be applied to application processors to reduce their power consumption.
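On Linux, DVFS is exposed through the cpufreq subsystem, where a governor picks an operating frequency from the set the processor supports. The sketch below simply reads that information from sysfs, assuming a Linux system with cpufreq enabled; the voltage side of DVFS/AVS is handled by the platform and is not directly visible here:

    # Minimal sketch: inspect the cpufreq (DVFS) settings for CPU0 on Linux.
    CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq"

    def read(name: str) -> str:
        with open(f"{CPUFREQ}/{name}") as f:
            return f.read().strip()

    # The governor (e.g., "ondemand" or "powersave") decides how the clock
    # is scaled between the minimum and maximum frequencies (in kHz).
    print("governor:", read("scaling_governor"))
    print("min kHz :", read("scaling_min_freq"))
    print("max kHz :", read("scaling_max_freq"))
    print("cur kHz :", read("scaling_cur_freq"))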

In an SoC such as the Qualcomm Snapdragon, which includes application processors along with the radio baseband processor, there are even more opportunities for better power management thanks to cross-layer communication among components. One good example could be solving problems such as fast dormancy without impacting packet core signaling capacity. In an ideal implementation, the OS running on the application processors can detect active and background threads, predict the need for wireless interface capacity, and control the baseband processor to put the wireless interface into the appropriate state (ready, standby, idle), or switch the radio interface from the macro network to WiFi based on predicted application requirements, e.g., a background file transfer that would otherwise be more taxing on the battery.
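To make the idea concrete, here is a purely hypothetical policy sketch of such cross-layer control. The function, state, and parameter names are my own illustration and do not correspond to any real Snapdragon or operating system API:

    # Hypothetical sketch of an OS-level policy that maps application
    # activity to a radio state, in the spirit of the cross-layer idea
    # above. None of these names correspond to a real API.
    from enum import Enum

    class RadioState(Enum):
        READY = "ready"      # active data transfer expected soon
        STANDBY = "standby"  # occasional background traffic
        IDLE = "idle"        # no traffic predicted

    def choose_radio_state(active_threads: int, background_transfer: bool,
                           wifi_available: bool) -> tuple[RadioState, str]:
        # Prefer WiFi for battery-hungry background transfers when available.
        interface = "wifi" if background_transfer and wifi_available else "macro"
        if active_threads > 0:
            return RadioState.READY, interface
        if background_transfer:
            return RadioState.STANDBY, interface
        return RadioState.IDLE, interface

    print(choose_radio_state(active_threads=0, background_transfer=True,
                             wifi_available=True))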

In the next post, I’ll discuss how the all-IP networking introduced with 4G provides new capabilities to overcome the limitations of battery life.
