Cache memories are the most area- and energy-consuming units in today's microprocessors. As the speed disparity between processor and external memory increases, designers try to put large multilevel caches on a chip to reduce the number of external memory accesses and thus boost system performance. (See Table 8.1 for a survey of the on-die caches of several recent high-end microprocessors.) On-chip data and instruction caches are implemented using arrays of densely packed static RAM cells. The device count for the caches often exceeds the number of transistors devoted to the processor's datapath and controller. For example, the Alpha 21364 and PA-RISC Mako microprocessors have over 90% of their transistors in RAM, most of them dedicated to caches; the Itanium 2 has 80% in caches, the IBM G5 has 72%, the PowerPC has 71%, and the StrongARM SA-110 has 70%. Due to their large load capacitance and high access rate, these caches account for a significant portion of the overall power dissipation (e.g., 35% in the Itanium 2; 43% in the StrongARM). Optimizing caches for power is therefore increasingly important. Although much work on energy reduction has taken place in the circuit and technology domains [10,11], interest in cache design for power efficiency at the architectural level continues to increase. Architecture is the entry point in the cache design hierarchy, and decisions taken at this level can drastically affect the efficiency of the design.
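The link between load capacitance, access rate, and power follows from the first-order CMOS dynamic power model, P = α · C · V² · f. The sketch below illustrates this relationship; all numeric values are illustrative assumptions, not measurements from the chapter.

```python
def dynamic_power(switched_cap_f, vdd_v, freq_hz, activity):
    """First-order CMOS dynamic power: P = activity * C * Vdd^2 * f.

    switched_cap_f: effective switched capacitance in farads
    vdd_v:          supply voltage in volts
    freq_hz:        clock frequency in hertz
    activity:       switching activity factor (0..1)
    """
    return activity * switched_cap_f * vdd_v ** 2 * freq_hz

# Hypothetical values for a processor whose cache arrays present a larger
# switched capacitance than the core logic (assumed numbers):
cache_p = dynamic_power(switched_cap_f=2.0e-9, vdd_v=1.2, freq_hz=1e9, activity=0.5)
core_p = dynamic_power(switched_cap_f=1.5e-9, vdd_v=1.2, freq_hz=1e9, activity=0.8)

total = cache_p + core_p
print(f"cache share of dynamic power: {cache_p / total:.0%}")
```

Even under these rough assumptions, the cache's share of dynamic power lands in the 35-45% range reported for real processors, which is why architectural techniques that reduce C (smaller active arrays) or the effective access rate are attractive.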
Title of host publication: Low-Power Processors and Systems on Chips
Publication status: Published - Jan 1 2005