Published January 1, 2023 | Version v1
Conference paper | Open Access

Utilizing Prefetch Buffers for Iterative Graph Applications

  • 1. University of Illinois, Urbana, IL 61801, USA
  • 2. Bilkent University, Department of Computer Engineering, Ankara, Türkiye

Description

Graph applications are bound by high memory latencies due to the vast graph data they process and their irregular memory access patterns. One of several approaches to overcoming this memory bottleneck is graph prefetching. Graph prefetchers can accurately anticipate the irregular memory access patterns observed in graph applications and bring the appropriate data into the caches beforehand. Graph prefetching achieves significant performance improvements on well-known graph benchmarks and applications. However, prefetchers that aggressively bring graph data into the L1 cache suffer from cache pollution and early eviction of valuable data. To solve this issue, we propose storing the prefetched graph data in separate prefetch buffers rather than in the L1 cache. We present a prefetch buffer design that holds the graph data and dispatches it when the CPU requests it. Our technique applies to L1-cache graph prefetchers paired with in-order cores, achieving average speedups of up to 20%. The prefetch buffers used in our design are small compared to the L1 cache, thus incurring negligible storage overhead.
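The core idea above can be sketched in software: a small buffer, separate from the cache, receives prefetched lines and is probed on each CPU load, so aggressive prefetching cannot evict useful L1 data. The following C sketch is purely illustrative; the entry count, line size, FIFO replacement, and invalidate-on-dispatch policy are assumptions for the example, not the paper's exact design parameters.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of a tiny, fully associative prefetch buffer
 * that holds prefetched graph data separately from the L1 cache.
 * All sizes and policies here are assumptions for the example. */

#define PB_ENTRIES 16          /* small relative to an L1 cache */
#define LINE_BYTES 64

typedef struct {
    uint64_t tag;              /* cache-line-aligned address */
    int      valid;
    uint8_t  data[LINE_BYTES];
} pb_entry;

typedef struct {
    pb_entry entries[PB_ENTRIES];
    int      next;             /* FIFO replacement cursor */
} prefetch_buffer;

static uint64_t line_addr(uint64_t addr) {
    return addr & ~(uint64_t)(LINE_BYTES - 1);
}

/* The prefetcher deposits a predicted line here instead of into the
 * L1 cache, so aggressive prefetching cannot pollute the cache. */
void pb_fill(prefetch_buffer *pb, uint64_t addr, const uint8_t *line) {
    pb_entry *e = &pb->entries[pb->next];
    pb->next = (pb->next + 1) % PB_ENTRIES;   /* FIFO victim choice */
    e->tag = line_addr(addr);
    e->valid = 1;
    memcpy(e->data, line, LINE_BYTES);
}

/* On a CPU load, the buffer is probed; a hit dispatches the data to
 * the core and invalidates the entry (one-shot use in this sketch). */
int pb_lookup(prefetch_buffer *pb, uint64_t addr, uint8_t *out) {
    uint64_t tag = line_addr(addr);
    for (int i = 0; i < PB_ENTRIES; i++) {
        pb_entry *e = &pb->entries[i];
        if (e->valid && e->tag == tag) {
            memcpy(out, e->data, LINE_BYTES);
            e->valid = 0;      /* entry consumed after dispatch */
            return 1;          /* hit: serviced from the buffer */
        }
    }
    return 0;                  /* miss: fall through to the cache hierarchy */
}
```

With 16 entries of 64 bytes each, the buffer holds 1 KiB of data plus tags, which is small next to a typical 32 KiB L1 data cache; this is the sense in which the storage overhead stays negligible.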
