Deep Reinforcement Learning (DRL)-Based Device-to-Device (D2D) Caching With Blockchain and Mobile Edge Computing
【Author】 Zhang, Ran; Yu, F. Richard; Liu, Jiang; Huang, Tao; Liu, Yunjie
【Source】IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
【Impact Factor】8.346
【Abstract】Device-to-Device (D2D) caching assists Mobile Edge Computing (MEC)-based caching in offloading inter-domain traffic by sharing cached items with nearby users, but its performance relies heavily on the caching nodes' willingness to share. In this paper, a Blockchain-based Cache and Delivery Market (CDM) is proposed as an incentive mechanism for the distributed caching system. Under the proposed incentive mechanism, the willingness of both D2D and MEC caching nodes is guaranteed by satisfying their expected reward for cache sharing. In addition, in the distributed CDM, content delivery transactions are executed by smart contracts. To reach consensus on transactions and prevent fraud, a consensus protocol among the smart contract execution nodes (SCENE) is necessary. To minimize the latency of reaching consensus while guaranteeing its confidence level, we propose a partial Practical Byzantine Fault Tolerance (pPBFT) protocol. We then model cache sharing and transaction execution consensus, and formulate caching placement and SCENE selection as Markov Decision Process problems. Due to the complexity and dynamics of these problems, a deep reinforcement learning approach is adopted to solve them. Simulation results show that the proposed schemes outperform conventional solutions in terms of traffic offloading, content retrieval latency, and consensus latency.
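Note: the abstract states that caching placement (and SCENE selection) is formulated as an MDP and solved with deep reinforcement learning, but gives no implementation details. The following is a minimal illustrative sketch, not the authors' method: a small DQN-style agent that maps an observed state to a discrete caching-placement action, where the state/action dimensions and the reward shaping (e.g., offloaded traffic minus retrieval latency) are assumptions for illustration.

```python
# Minimal DQN-style sketch (illustrative only, not the paper's implementation).
import random
import torch
import torch.nn as nn

STATE_DIM = 8      # assumed: e.g., content popularity and node-load features
NUM_ACTIONS = 4    # assumed: which content item to place in the node's cache

class QNetwork(nn.Module):
    """Small feed-forward network estimating Q(s, a) for each placement action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.95, 0.1

def select_action(state):
    # Epsilon-greedy over the estimated Q-values for each placement choice.
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def update(state, action, reward, next_state):
    # One-step TD update toward r + gamma * max_a' Q(s', a').
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q_sa - target).pow(2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example step: the reward could encode offloaded traffic minus retrieval latency.
s, s_next = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
a = select_action(s)
update(s, a, reward=1.0, next_state=s_next)
```

In the paper's setting, the same pattern would presumably be applied to SCENE selection as well, with consensus latency and confidence level entering the reward; that mapping is an assumption here.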
【Keywords】Device-to-device communication; Smart contracts; Consensus protocol; Streaming media; Wireless communication; Reinforcement learning; Blockchain; smart contract; caching; D2D; MEC; deep reinforcement learning
【Publication Date】2020 OCT
【Date Added】2022-01-02
【Document Type】
【Subject Category】
--
【DOI】 10.1109/TWC.2020.3003454
Comments