Adaptive cache-line size management on 3D integrated microprocessors

Takatsugu Ono, Koji Inoue, Kazuaki Murakami

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

Memory bandwidth can be dramatically improved by stacking main memory (DRAM) on top of the processor cores and connecting them with wide on-chip buses composed of through-silicon vias (TSVs). 3D stacking reduces the cache miss penalty because a large amount of data can be transferred from main memory to the cache at a time. If a large cache line size is employed, we can expect a prefetching effect. However, a large line may worsen system performance if programs do not exhibit enough spatial locality in their memory references. To solve this problem, we introduce a software-controllable variable line-size cache scheme. In this paper, we apply it to an L1 data cache with a 3D-stacked DRAM organization. Our evaluation shows that the approach reduces L1 data cache and stacked DRAM energy consumption by up to 75% compared to a conventional cache.
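The trade-off the abstract describes — a large line acts as a prefetch when spatial locality is high, but wastes transfer bandwidth and energy when it is low — can be sketched with a toy direct-mapped cache model. This is an illustrative sketch only (the cache size, line sizes, and access traces below are assumptions, not the paper's simulator or benchmarks):

```python
# Toy direct-mapped cache model: count misses and bytes transferred
# for a given line size. Illustrative parameters, not the paper's setup.

def miss_count(addresses, cache_bytes=1024, line_bytes=64):
    """Count misses for a direct-mapped cache with the given line size."""
    n_lines = cache_bytes // line_bytes
    tags = [None] * n_lines            # one tag per cache line
    misses = 0
    for addr in addresses:
        block = addr // line_bytes     # memory block containing addr
        index = block % n_lines        # cache line it maps to
        if tags[index] != block:       # miss: fetch the whole line
            tags[index] = block
            misses += 1
    return misses

# Sequential word-by-word sweep over 4 KiB: high spatial locality.
seq = list(range(0, 4096, 4))

# 256-byte-strided accesses: each line fetch is used only once.
strided = list(range(0, 4096 * 16, 256))

for line in (16, 128):
    for name, trace in (("seq", seq), ("strided", strided)):
        m = miss_count(trace, line_bytes=line)
        print(f"line={line:3d}B {name:8s} misses={m:4d} "
              f"bytes_fetched={m * line:6d}")
```

With these traces, growing the line from 16 B to 128 B cuts sequential-scan misses from 256 to 32 with no extra traffic (the prefetch effect), while for the strided trace misses stay at 256 and fetched bytes grow 8x — pure wasted bandwidth and energy, which is exactly the case where a software-selected smaller line pays off.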

Original language: English
Title of host publication: 2009 International SoC Design Conference, ISOCC 2009
Pages: 472-475
Number of pages: 4
ISBN: 9781424450343
DOI: 10.1109/SOCDC.2009.5423920
Publication status: Published - Dec 1, 2009
Event: 2009 International SoC Design Conference, ISOCC 2009 - Busan, Korea, Republic of
Duration: Nov 22, 2009 - Nov 24, 2009



All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

Cite this

Ono, T., Inoue, K., & Murakami, K. (2009). Adaptive cache-line size management on 3D integrated microprocessors. In 2009 International SoC Design Conference, ISOCC 2009 (pp. 472-475). https://doi.org/10.1109/SOCDC.2009.5423920
