Towards Efficient Scheduling of Concurrent DNN Training and Inferencing on Accelerated Edges

Published in CCGridW, 2023

Abstract: Edge devices are typically used to perform low-latency DNN inferencing close to the data source. However, with accelerated edge devices and privacy-oriented paradigms like Federated Learning, we can increasingly use them for DNN training too. This can require both training and inference workloads to run concurrently on an edge device, without compromising the inference latency. Here, we explore such concurrent scheduling on edge devices, and provide initial results demonstrating the impact of interaction between training and inferencing on latency and throughput.

Authors: Prashanthi S K, Vinayaka Hegde, and Yogesh Simmhan