
Adaptive and Efficient Qubit Allocation Using Reinforcement Learning in Quantum Networks

Quantum entanglement enables high-speed and inherently privacy-preserving transmission for information communication in quantum networks. Qubit scarcity is an issue that cannot be ignored in quantum networks, owing to the limited storage capacity of quantum devices, the short lifespan of qubits, and other constraints. In this article, we first formulate the qubit competition problem as the Cooperative-Qubit-Allocation-Problem (CQAP), taking into account both the waiting time and the fidelity of end-to-end entanglement for a given set of transmission links. We then model the CQAP as a Markov Decision Process (MDP) and adopt a Reinforcement Learning (RL) algorithm to allocate qubits among quantum repeaters self-adaptively and cooperatively. Further, we introduce an Active Learning (AL) algorithm that improves the efficiency of the RL algorithm by reducing the number of trial-and-error iterations. Simulation results demonstrate that our proposed algorithm outperforms the benchmark algorithms, achieving a 23.5 ms reduction in average waiting time and an improvement of 19.2 in average path maturity degree.
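To make the MDP-plus-RL formulation concrete, the following is a minimal tabular Q-learning sketch of a qubit-allocation decision at a single repeater. The state space (remaining free qubits), action set (qubits reserved per request), reward shaping (a fidelity bonus traded against a waiting penalty), and all constants are illustrative assumptions for exposition — they are not the paper's CQAP model, which optimizes over cooperating repeaters on a given link set.

```python
import random

# ASSUMED toy MDP, not the paper's model: one repeater with CAPACITY free
# qubits decides, per request, how many qubits to reserve. Reserving more
# qubits earns a higher (fidelity-like) reward but drains the pool, which
# incurs a (waiting-time-like) penalty when later requests find it empty.
CAPACITY = 4          # free qubits at the repeater; state = qubits left
ACTIONS = [1, 2]      # qubits to reserve for the current request
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def step(state, action):
    """Toy dynamics: return (next_state, reward) for reserving `action`
    qubits when `state` qubits remain. Penalties stand in for waiting time."""
    if action > state:                 # not enough qubits: request waits
        return CAPACITY, -5.0          # pool refills during the wait
    nxt = state - action
    reward = 2.0 if action == 2 else 1.0   # fidelity bonus for more qubits
    if nxt == 0:                       # pool exhausted: next request waits
        return CAPACITY, reward - 3.0
    return nxt, reward

def train(episodes=2000, seed=0):
    """Standard epsilon-greedy tabular Q-learning over the toy MDP."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(CAPACITY + 1) for a in ACTIONS}
    for _ in range(episodes):
        state = CAPACITY
        for _ in range(10):            # ten requests per episode
            if rng.random() < EPS:     # explore
                action = rng.choice(ACTIONS)
            else:                      # exploit current estimates
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (
                reward + GAMMA * best_next - q[(state, action)]
            )
            state = nxt
    return q

q = train()
# Greedy policy learned for each non-empty pool size.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(1, CAPACITY + 1)}
print(policy)
```

The paper's Active Learning component would sit on top of a loop like this, steering which state-action pairs get sampled so the agent converges with fewer trial-and-error iterations than uniform epsilon-greedy exploration.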

Keywords: qubit; reinforcement learning; qubit allocation; quantum networks; adaptive efficient

Journal Title: IEEE Network
Year Published: 2022


