Dagstuhl Seminar 25151
Disruptive Memory Technologies
(Apr 06 – Apr 11, 2025)
Organizers
- Haibo Chen (Shanghai Jiao Tong University, CN)
- Ada Gavrilovska (Georgia Institute of Technology - Atlanta, US)
- Jana Giceva (TU München - Garching, DE)
- Frank Hady (Intel Corporation - Portland, US)
- Olaf Spinczyk (Universität Osnabrück, DE)
Contact
- Marsha Kleinbauer (for scientific matters)
- Simone Schilke (for administrative matters)
Memory is a central component of every computer system. Technological evolution has led to greater capacities and higher speeds, but essential properties of the interface between hardware and software have remained unchanged for decades: main memory has traditionally been passive, largely homogeneous, and volatile. These properties are now so firmly anchored in the expectations of software developers that they manifest themselves in their products.
However, a wave of innovations is currently shattering these assumptions. In this sense, several new memory technologies are disruptive to the entire software industry. For example, new servers combine high-bandwidth memory (HBM) with classic memory modules, and Compute Express Link (CXL) enables even more hybrid architectures (non-homogeneous). In-/near-memory computing approaches abandon the traditional von Neumann architecture and promise enormous performance improvements by allowing a vast number of parallel operations on data objects in or close to the memory (non-passive). Finally, persistent memory is available for server and embedded systems (non-volatile) and can be used for persistent in-memory data structures or even fully persistent processes, as the sketch below illustrates.
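To make the "non-volatile" point concrete, here is a minimal C sketch of a persistent in-memory counter. It is illustrative only: the path /mnt/pmem/example is a hypothetical placeholder for a persistent-memory file (e.g., on a DAX-mounted filesystem), and plain POSIX mmap/msync is used for portability rather than a dedicated PMem library such as PMDK, which on real hardware would also issue cache-line flushes for durability.

```c
/* Minimal sketch of a persistent in-memory data structure.
 * Assumptions: /mnt/pmem/example is a hypothetical placeholder for
 * a persistent-memory-backed file; POSIX mmap/msync stands in for a
 * dedicated PMem library such as PMDK. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

struct counter {
    uint64_t magic;   /* marks an already-initialized region */
    uint64_t value;   /* survives process restarts */
};

#define MAGIC 0x504d454d /* "PMEM" */

int main(void) {
    int fd = open("/mnt/pmem/example", O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, sizeof(struct counter)) < 0) { perror("ftruncate"); return 1; }

    struct counter *c = mmap(NULL, sizeof *c, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (c == MAP_FAILED) { perror("mmap"); return 1; }

    if (c->magic != MAGIC) {   /* first run: region is still zeroed */
        c->value = 0;
        c->magic = MAGIC;
    }
    c->value++;                /* operate directly on the mapped data */

    /* Flush the update so it is durable across restarts. */
    if (msync(c, sizeof *c, MS_SYNC) < 0) { perror("msync"); return 1; }
    printf("counter = %llu\n", (unsigned long long)c->value);

    munmap(c, sizeof *c);
    close(fd);
    return 0;
}
```

Because the structure lives directly in the mapped region, each run of the program finds the previous counter value again without any serialization step; it is precisely this property that software written for volatile main memory does not anticipate.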
As always, these innovations arrive with high expectations, and the memory demands of AI raise those expectations even higher. But as with all technologies, they encounter system realities. To be useful, an innovation must shine through within a full system of hardware and software, yet existing software and algorithms are finely tuned for existing systems. Breaking this system inertia requires innovations of such impact that they pull the architecture along with them, delivering much better energy consumption, processing speed, reliability, or cost. Which technologies can deliver at this systems level? How must systems change to enable these advantages? Where is co-optimization across silicon technology, hardware subsystems, layers of software, and even algorithms warranted? What new architecture models do we expect, and what migration path will lead us there?
This Dagstuhl Seminar brings together about 40 leading experts from industry and academia to tackle these difficult questions in a holistic fashion. Dagstuhl Seminars are highly interactive: we plan a mix of presentations in which expert knowledge is shared, open brainstorming sessions, and group discussions. Towards the end of the week, we hope to have ideas for shaping future research in this area and to have formed groups of participants who will follow up on those ideas in collaborative research efforts.
Classification
- Databases
- Hardware Architecture
- Operating Systems
Keywords
- Processing in Memory (PIM)
- Persistent Memory (PMem)
- Disaggregated Memory
- Data-centric Computing
- System Software Stack