Adaptive cache-line size management on 3D integrated microprocessors

Takatsugu Ono, Koji Inoue, Kazuaki Murakami

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)


Memory bandwidth can be dramatically improved by stacking main memory (DRAM) on processor cores and connecting them with wide on-chip buses composed of through-silicon vias (TSVs). This 3D stacking reduces the cache miss penalty because a large amount of data can be transferred from main memory to the cache at once. If a large cache line size is employed, we can also expect a prefetching effect; however, it may degrade system performance when programs lack sufficient spatial locality of memory references. To solve this problem, we introduce a software-controllable variable line-size cache scheme. In this paper, we apply it to an L1 data cache with a 3D-stacked DRAM organization. Our evaluation shows that the approach reduces the combined energy consumption of the L1 data cache and stacked DRAM by up to 75% compared with a conventional cache.
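The idea in the abstract can be sketched in simulation: a cache that fetches a variable number of bytes per miss and adapts the fetch width to the spatial locality it actually observes. The sketch below is illustrative only, not the authors' implementation; the sub-block granularity, the candidate line sizes, the adaptation interval, and the utilization thresholds are all assumptions, and capacity/replacement are ignored for brevity.

```python
SUB = 32  # assumed sub-block granularity in bytes


class VariableLineSizeCache:
    """Toy model of a variable line-size cache (illustrative assumptions only).

    On a miss, `line_size` bytes are fetched at once (modeling the wide
    TSV bus to stacked DRAM). Utilization of the fetched sub-blocks is
    sampled periodically to pick the next line size: wide lines when
    spatial locality is good, narrow lines (less DRAM energy) when not.
    Capacity and replacement are deliberately ignored in this sketch.
    """

    SIZES = (32, 64, 128)  # assumed candidate line sizes in bytes

    def __init__(self):
        self.blocks = {}            # sub-block index -> "touched since fetch?"
        self.size_i = 2             # start with the widest fetch (128 B)
        self.hits = self.misses = 0
        self.fetched = self.touched = 0

    @property
    def line_size(self):
        return self.SIZES[self.size_i]

    def access(self, addr):
        sb = addr // SUB
        if sb in self.blocks:                 # hit
            self.hits += 1
            if not self.blocks[sb]:           # first touch of a prefetched sub-block
                self.blocks[sb] = True
                self.touched += 1
            return True
        self.misses += 1                      # miss: fetch one whole line
        base = addr // self.line_size * self.line_size
        for b in range(base // SUB, (base + self.line_size) // SUB):
            if b not in self.blocks:
                self.blocks[b] = (b == sb)    # demand sub-block counts as touched
                self.fetched += 1
                if b == sb:
                    self.touched += 1
        if self.misses % 16 == 0:             # assumed adaptation interval
            self._adapt()
        return False

    def _adapt(self):
        util = self.touched / max(self.fetched, 1)
        if util > 0.75 and self.size_i < len(self.SIZES) - 1:
            self.size_i += 1                  # good spatial locality: fetch wider
        elif util < 0.3 and self.size_i > 0:
            self.size_i -= 1                  # poor locality: shrink the fetch
        self.fetched = self.touched = 0       # start a fresh sampling window
```

With a sequential access stream the model keeps the widest line (most fetched sub-blocks get used), while a sparse stride-128 stream drives the line size down, which is the energy-saving behavior the abstract describes.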

Original language: English
Title of host publication: 2009 International SoC Design Conference, ISOCC 2009
Number of pages: 4
Publication status: Published - Dec 1 2009
Event: 2009 International SoC Design Conference, ISOCC 2009 - Busan, Korea, Republic of
Duration: Nov 22 2009 - Nov 24 2009


All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
