Edge computing benefits IoT services by providing authentication, security, and other critical functionality at the edge, close to the IoT devices, rather than in centralized clouds. Operations can continue even when cloud servers are unreachable, while the local IoT device space remains protected from attacks. Edge computing solutions are often coupled with data filtering or intelligent processing techniques at the edge. However, this introduces further challenges: adapting machine learning techniques to run on IoT hardware remains an open problem, and no general framework exists for easily porting typical ML models to IoT devices.
Another line of research explores performing an initial phase of data filtering and ML inference on edge devices, or on more powerful local gateways, to reduce data volume before sending it to a central location for the final phases of processing. This approach can greatly reduce the load on the cloud and the network, while allowing tasks that require some degree of global knowledge to preserve a certain level of privacy. It opens a new area of research concerned with identifying optimal strategies for distributing tasks across edge and cloud environments, for different types of applications and networks. In this context, edge orchestration is a key factor in ensuring optimal utilization of edge resources through energy-, compute-, network-, and quality-aware service placement. Another open research direction concerns optimal data routing: distributing IoT data to the appropriate target services placed within the edge-to-cloud continuum, taking context and data volume into account, would bring significant benefits to AI applications at the edge.
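The edge-side filtering-then-inference pattern described above can be sketched in a few lines. The following is a minimal illustration, not an implementation from the literature: the `Reading` type, the threshold filter, and the aggregate "inference" stand in for a real on-device ML model, and all names are assumptions.

```python
# Hypothetical sketch: filter raw sensor data at the edge, then run a
# cheap local computation, so only a compact payload leaves the device.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Reading:
    sensor_id: str
    value: float


def edge_filter(readings: List[Reading], threshold: float) -> List[Reading]:
    """Drop uninteresting readings so less data traverses the network."""
    return [r for r in readings if abs(r.value) >= threshold]


def edge_infer(readings: List[Reading]) -> Dict[str, float]:
    """Stand-in for on-device ML inference: forward an aggregate
    summary to the cloud instead of the raw samples."""
    values = [r.value for r in readings]
    return {
        "count": float(len(values)),
        "mean": sum(values) / len(values) if values else 0.0,
        "max": max(values, default=0.0),
    }


readings = [Reading("s1", 0.1), Reading("s1", 5.2), Reading("s2", 7.9)]
kept = edge_filter(readings, threshold=1.0)
payload = edge_infer(kept)  # compact payload sent upstream for final processing
```

The raw values never leave the edge; only the summary does, which is one way a task needing global knowledge can still preserve some privacy.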
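To make the orchestration idea concrete, one very simple strategy is to score each candidate node with a weighted cost over energy, compute load, and network latency, and place the service on the cheapest node. This greedy sketch is an illustrative assumption, not a specific orchestrator from the literature; node names, fields, and weights are hypothetical.

```python
# Hypothetical sketch of energy-, compute-, and network-aware placement:
# choose the node minimizing a weighted cost. Weights are illustrative.
from typing import Dict, List


def placement_cost(node: Dict, w_energy: float = 1.0,
                   w_compute: float = 1.0, w_network: float = 1.0) -> float:
    return (w_energy * node["energy"]
            + w_compute * node["load"]
            + w_network * node["latency_ms"])


def place_service(nodes: List[Dict]) -> Dict:
    """Greedy orchestration step: pick the cheapest candidate node."""
    return min(nodes, key=placement_cost)


nodes = [
    {"name": "edge-a", "energy": 0.4, "load": 0.7, "latency_ms": 5},
    {"name": "edge-b", "energy": 0.6, "load": 0.2, "latency_ms": 8},
    {"name": "cloud", "energy": 0.2, "load": 0.1, "latency_ms": 60},
]
best = place_service(nodes)
```

With these example weights the high-latency cloud node is penalized heavily, so a nearby edge node wins; tuning the weights is where the quality-awareness mentioned above would enter.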