Smart cameras that leverage edge computing (edge cameras) are increasingly being combined with deep neural networks to realize the Artificial Intelligence of Things (AIoT), enabling a smart life with “low-touch services,” such as unmanned stores. However, deploying many edge cameras and training their models (edge models) in unmanned stores is time-consuming and labor-intensive. Studies have therefore applied transfer learning, but training edge models often requires powerful servers. Although we previously proposed direct edge-to-edge instance transfer, that approach did not exploit latent features, still required high bandwidth and long training times, and risked privacy leakage. We therefore propose direct edge-to-edge many-to-many latent feature transfer learning, which comprises elite latent feature (ELF) extraction, direct edge-to-edge one-to-many latent feature transfer learning (DeOmf), and direct edge-to-edge many-to-one latent feature transfer learning (DeMof). Through ELF extraction, DeOmf allows one source edge camera to transfer latent features to multiple target edge cameras, which improves knowledge reuse and accelerates initial model training. Further, DeMof exploits the diversity of multiple source edge cameras for one target edge camera. The experimental results show that DeOmf improves accuracy by 6.30%, reduces training time by 32.15%, and saves 22.92% (up to 83.33%) on transmission cost. In addition, DeMof increases accuracy by 3.42% and saves 66.99% on training time and 56.67% on transmission cost. These improvements can greatly broaden the applicability of our proposed system compared with instance-based transfer learning.
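To make the one-to-many idea concrete, the sketch below illustrates the general pattern the abstract describes: a source camera extracts latent features with a frozen backbone, keeps only an "elite" representative subset, and ships that compact payload to several targets, each of which trains a lightweight classifier head locally. All names and the centroid-based elite selection are illustrative assumptions, not the paper's ELF/DeOmf algorithms; the random-projection "backbone" and synthetic data merely stand in for a CNN and real camera footage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen backbone shared by all edge cameras: a fixed
# random projection standing in for a pretrained CNN feature extractor.
W_backbone = rng.normal(size=(32, 8))

def extract_latent(x):
    """Map raw 32-dim inputs to 8-dim latent features (no raw data leaves)."""
    return np.tanh(x @ W_backbone)

# Source camera's local data: two synthetic classes of observations.
X_src = np.concatenate([rng.normal(0.5, 1, (100, 32)),
                        rng.normal(-0.5, 1, (100, 32))])
y_src = np.array([0] * 100 + [1] * 100)

Z_src = extract_latent(X_src)  # latent features, not raw images

def select_elite(Z, y, keep=50):
    """Illustrative 'elite' selection: per class, keep the features
    closest to the class centroid (the most representative samples)."""
    idx = []
    for c in np.unique(y):
        mask = np.where(y == c)[0]
        centroid = Z[mask].mean(axis=0)
        dist = np.linalg.norm(Z[mask] - centroid, axis=1)
        idx.extend(mask[np.argsort(dist)[:keep]])
    return np.array(idx)

elite = select_elite(Z_src, y_src)
Z_elf, y_elf = Z_src[elite], y_src[elite]  # the compact payload sent

def train_head(Z, y, lr=0.5, epochs=200):
    """Train a small logistic-regression head on received latent features."""
    w, b = np.zeros(Z.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
        grad = p - y
        w -= lr * Z.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# One-to-many: the same payload seeds three target cameras.
heads = [train_head(Z_elf, y_elf) for _ in range(3)]

# A target can now classify latent features immediately.
w, b = heads[0]
acc = ((1.0 / (1.0 + np.exp(-(Z_src @ w + b))) > 0.5) == y_src).mean()
```

The bandwidth saving in this pattern comes from transmitting the selected latent features (here 100 x 8 floats) rather than raw inputs (200 x 32), and privacy exposure is reduced because only nonlinear projections of the data leave the source device.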