Pham Minh, Li Hao, Yuan Yongke, Mou Chengcheng, Ramachandran Kandethody, Xu Zichen, Tu Yicheng
University of South Florida, Tampa, FL, USA.
Beijing University of Technology, Beijing, China.
ICS 2022, Jun 2022. doi: 10.1145/3524059.3532387. Epub 2022 Jun 28.
Due to the high level of parallelism, developing system software on massively parallel hardware such as GPUs poses unique challenges. One such challenge is designing a dynamic memory allocator, whose task is to allocate memory chunks to requesting threads at runtime. State-of-the-art GPU memory allocators maintain a global data structure holding metadata to facilitate allocation and deallocation. However, this centralized data structure can easily become a bottleneck in a massively parallel system. In this paper, we present a novel approach to dynamic memory allocation that requires no centralized data structure. The core idea is to let threads follow a random search procedure to locate free pages. We then extend this idea to more advanced designs and algorithms that achieve an order-of-magnitude improvement over the basic scheme. We present mathematical proofs demonstrating that (1) the basic random search design achieves asymptotically lower latency than the traditional queue-based design and (2) the advanced designs improve significantly on the basic one. Extensive experiments are consistent with our mathematical models and demonstrate that our solutions can achieve up to two orders of magnitude improvement in latency over the best-known existing solutions.