Duan Jie, Wang Xiong, Xu Shizhong, Liu Yuanni, Xu Chuan, Zhao Guofeng
Communication and Information Engineering, Chongqing University of Posts and Telecommunications (CQUPT), Chongqing, China.
Communication and Information Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China.
PLoS One. 2016 Jun 30;11(6):e0158260. doi: 10.1371/journal.pone.0158260. eCollection 2016.
Much recent research focuses on ICN (Information-Centric Networking), in which named content, rather than the end host, becomes the first-class citizen. In ICN, named content can be further divided into many small chunks, and chunk-based communication has merits over content-based communication. Universal in-network caching is one of the fundamental infrastructures of ICN. In this work, a chunk-level cache mechanism based on a pre-fetch operation is proposed. The main idea is that routers with cache storage should pre-fetch and cache the next chunks that are likely to be accessed in the near future, according to the received requests and the cache policy, in order to reduce the users' perceived latency. Two pre-fetch driven modes are presented to answer when and how to pre-fetch. LRU (Least Recently Used) is employed for cache replacement. Simulation results show that the average user-perceived latency and hop count can be decreased by employing this pre-fetch-based cache mechanism. Furthermore, we also demonstrate that the results are influenced by many factors, such as the cache capacity, the Zipf parameter, and the pre-fetch window size.
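To make the described mechanism concrete, the following is a minimal sketch of a chunk-level router cache that combines LRU replacement with a request-driven pre-fetch window, as outlined in the abstract. The class and method names, the `fetch_upstream` callback, and the window semantics (pre-fetching the next W chunks of the same content after each request) are illustrative assumptions, not the paper's actual implementation.

```python
from collections import OrderedDict


class PrefetchChunkCache:
    """Sketch of a chunk-level in-network cache with LRU replacement
    and a fixed pre-fetch window (all names are illustrative)."""

    def __init__(self, capacity, prefetch_window):
        self.capacity = capacity                # max number of chunks held
        self.prefetch_window = prefetch_window  # how many next chunks to pre-fetch
        self.store = OrderedDict()              # (content, index) -> data, in LRU order

    def _insert(self, chunk_id, data):
        # Insert a chunk, evicting the least recently used one when full.
        if chunk_id in self.store:
            self.store.move_to_end(chunk_id)
            return
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)      # evict the LRU chunk
        self.store[chunk_id] = data

    def handle_request(self, content, index, fetch_upstream):
        """Serve a request for chunk `index` of `content`; on a miss,
        fetch it upstream, then pre-fetch the next chunks in the window."""
        key = (content, index)
        if key in self.store:
            self.store.move_to_end(key)         # cache hit: refresh LRU position
            data = self.store[key]
        else:
            data = fetch_upstream(content, index)  # cache miss: fetch from upstream
            self._insert(key, data)
        # Pre-fetch the chunks likely to be requested next.
        for i in range(index + 1, index + 1 + self.prefetch_window):
            nxt = (content, i)
            if nxt not in self.store:
                self._insert(nxt, fetch_upstream(content, i))
        return data
```

Under this sketch, a larger `prefetch_window` trades extra upstream traffic and cache occupancy for a higher chance that subsequent chunk requests hit locally, which is consistent with the abstract's observation that the pre-fetch window size influences the results.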